Numinous Rationality

AI Alignment and the Distributed Second Coming of Christ

A podcast interview exploring my thinking over the last 5 years

Divia Eden and Ben Goldhaber of Mutuals interviewed me about what I've been thinking about over the last 5 years. Below is a lightly edited transcript of our conversation. (Link to audio on YouTube with clickable timestamps.)

Topics discussed include:

  • what religion and spirituality might have to offer moral philosophy, AI alignment, and AI coordination

  • Christopher Langan's theory-of-everything, the Cognitive-Theoretic Model of the Universe, and how it might provide coherent intellectual foundations for synthesizing the metaphysical claims found across religious and spiritual traditions

  • speculations about non-naive interpretations of "the afterlife"

  • my interpretation of the Second Coming of Christ as a potential self-fulfilling prophecy, along with speculations about what it might look like

  • what embodied cognition means to me, and why it's led me to have longer AGI timelines

Table of contents

Show Notes

[0:00] Introducing Alex

[1:51] What is metaphysics?

[2:54] Tenuous metaphysical assumptions behind the is-ought problem

[4:43] Alex's AI alignment journey

[8:30] Healing infant trauma – Alex's first formative spiritual experience

[10:25] The relevance of spiritual experiences for moral philosophy

[12:34] Convergent philosophical views among religious and spiritual traditions

[14:43] Alex’s take on Buddhism

[16:19] Mathematical formalizations of truth, goodness, and beauty might essentially coincide

[18:21] Alex’s take on Christ’s crucifixion

[20:14] Alex’s first "direct experience of God"

[22:23] Psychological distortions as a central problem in AI alignment and AI coordination

[25:18] A secular lens on spirituality – addressing psychological distortions

[28:53] Introducing Chris Langan and the CTMU

[29:41] How Alex got interested in the CTMU

[32:48] Alex attempts to summarize core ideas of the CTMU

[35:25] Alex’s ITT-passing habit

[37:15] On Chris Langan’s political views

[39:08] Metaethics from UDT and the CTMU, pt 1 – acting from behind the universal veil of ignorance

[40:03] Logical time and the "lazy evaluation" of reality

[41:31] Spirituality vs the orthogonality thesis

[46:19] Metaethics from UDT and the CTMU, pt 2 – elaborations on "ethics as self-interest"

[48:15] Speedrunning the AI "danger zone"?

[49:20] Metaethics from UDT and the CTMU, pt 3 – "we are all one"

[50:52] "Reincarnation" and "the afterlife"

[54:53] How might information get transferred across lifetimes?

[58:17] Is love a spandrel?

[59:18] "Reincarnation" and "souls"

[1:04:56] Karma

[1:06:47] Overt physicalist vs subtle physicalist vs non-physicalist explanations

[1:07:36] Cross-lightcone effects of prayer via influencing wave function collapse

[1:10:18] The physicalism null hypothesis, pt 1

[1:11:19] Cross-hemisphere remote healing with ayahuasca?

[1:12:13] Learning from "plant spirits"

[1:17:22] The physicalism null hypothesis, pt 2

[1:19:45] "The afterlife" as already happening, but occluded by psychological distortions

[1:23:37] The CTMU as an articulation of the metaphysical a priori

[1:24:54] CTMU vs Tegmark IV vs ultrafinitism

[1:26:19] The Distributed Second Coming as a self-fulfilling prophecy

[1:30:26] Synthesizing the world religions with each other, and with science

[1:31:35] AI coordination – the Second Coming as the prevailing of the "Schelling coalition"

[1:33:31] AI peacemakers, for empowering human peacemakers

[1:35:24] Cellular intelligence, "embodied cognition", and AI timelines

[1:42:05] Transformative AIs may not outcompete humans at everything

[1:43:04] Is the "AI" part of "AI alignment" a red herring?

[1:45:16] Closing

The Last Judgment by Michelangelo (1541) in the Sistine Chapel, Rome


Show notes

Double-crux: a conversational technique for arriving at mutual understanding

Paul Dirac on truth and beauty: “If one is working from the point of view of getting beauty into one's equation, ... one is on a sure line of progress.”

Eliezer’s LessWrong comment about a superintelligence stably believing that 51 is prime: https://www.lesswrong.com/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq#:~:text=E.g.%3A%20Eliezer,ounce%20of%20understanding 

Reincarnation book that Alex recommended to Divia: https://www.amazon.com/Lifecycles-Reincarnation-Life-Christopher-Bache/dp/1557786453 

Universal Love, Said the Cactus Person: https://slatestarcodex.com/2015/04/21/universal-love-said-the-cactus-person/ 

Ayahuasca retreat centers where Alex "interfaced with plant spirits": https://templeofthewayoflight.org/ and https://niweraoxobo.com/

Metareligion as the Human Singularity: https://cosmosandhistory.org/index.php/journal/article/view/694 

Michael Levin interview excerpt about cellular robustness, from 1:25:15 to 1:27:08.

[0:00] Introducing Alex

Divia: Today we're here with Alex Zhu, whom I've known for a long time. I think I met you almost a decade ago? 

Alex: That sounds about right. 

Divia: Yeah, Alex is currently working full time. You have an institute now, you just said? 

Alex: Yes. 

Divia: What's it called? 

Alex: The Mathematical Metaphysics Institute.

Divia: Okay, the Mathematical Metaphysics Institute, which makes sense, because Alex has been interested in AI safety for a long time, but then it seemed like part of where you ended up with that was that the insights that the different religions and spiritual people had seemed super relevant to AI safety.

Alex: That's right. 

Divia: And sort of diving into that, and… your views have evolved and changed, certainly over the time I've known you, and now you have an organization there.

Alex: That's right. 

Divia: And you certainly have a really strong background in math as well. I don't know to what extent you've been formalizing things recently, but I imagine that's always something that's on your mind.

Alex: I was very good at math competitions. I am not very good at creating new formalisms, but I am good at learning existing ones and trying to synthesize them, and taking vague, handwavy ideas and expressing them in these formalisms. 

Divia: Cool. I’ve wanted to have you on the podcast for a long time, because I feel like we tend to have conversations that I find really interesting, and you haven't written a lot of it up, so I was hoping that maybe more people could hear some of what's been on your mind.

Alex: Thanks! I appreciate that. 

Divia: And we're here with Ben too. As a little bit of context for our listeners, I recently moved and had a baby, which is why we haven't had a podcast recently, but we're hoping to get back into putting them out somewhat more regularly than we were. 

Ben: Yeah, absolutely. And I'm very excited that we're chatting with Alex, whose work – for context for our listeners – I know far less about than Divia does, which is also very exciting: (A) for me to learn, and (B) for me to play the role of the audience when I get to ask stupid questions.

[1:51] What is metaphysics?

Ben: Like for instance, when we talk about metaphysics, I have some idea of what this might mean, but is there a good definition? How do you operate with this? 

Alex: I used to think it meant abstract nonsense philosophy that is fake and completely irrelevant for everything. I now think of it as – there are a bunch of very, very basic questions, like "What is an object? What is an observer? What is existence?" that are basically bottlenecking the other fields of philosophy, like epistemology and metaethics. And that there are real and important questions here that are also tractable. 

Ben: One of our previous guests, I think it was Ben Weinstein-Raun, talked a little bit about metaethics. And the way I think about this is like deciding which ethical framework you're going to follow. Is that what you mean by it, when you talk about how metaphysics interacts with metaethics? 

Alex: I was thinking of it more as, how do you ground ethics? If there are a bunch of different ethical frameworks, how do you pick one over the other? 

[2:54] Tenuous metaphysical assumptions behind the is-ought problem

Alex: Like, the is-ought problem is considered super fundamental. And the way I think about it now is: actually, our default intuitive conception of "is" has a bunch of metaphysical assumptions baked into it, that like, if you take them out, the is-ought problem as it naively appears isn't actually nearly as strong of a force as it might seem.

Divia: Can you name some of the assumptions? 

Alex: Yeah, like that there is such a thing as an "is" that's independent of any observer. 

Divia: Right, I see what you're saying. 

Alex: And so if you take the view that all "is"s are part of observation, and all observations are part of conscious beings, and all conscious beings by necessity have some implicit "ought"s that are related to their existence at all, then… 

Divia: Interesting. I like that. What I thought philosophers meant by the is-ought problem was that they didn't see a way to derive an "ought" from an "is". Like, you can say a bunch of things about "is", but then, how could that ever possibly imply anything about what's good? 

Ben: Ah, okay, okay. 

Alex: It's the strongest argument for moral anti-realism that I'm aware of. 

Divia: Yeah. And maybe we discussed this with Ben Weinstein-Raun on our podcast too, but I'm often not quite sure what people mean by moral realism and moral anti-realism, not because I haven't read the definitions and googled about it. I think I basically am familiar with the definitions, but I often get the sense when I have conversations with people that there's something I'm missing about what they mean, which kind of reminds me of what you're saying. 

Alex: I share that view, by the way. I cringe a little bit when I call myself a moral realist for basically this reason. Like, if someone asked "Do you believe in God?" I would cringe for a similar reason because there's just so much that's loaded in that word. 

Divia: Right. 

Ben: Right. 

[4:43] Alex's AI alignment journey

Divia: Okay. Can we take that as a jumping off point? I mentioned this in the introduction, but can you maybe say in your own words what your path has been in terms of caring about AI safety and then looking to different spiritual traditions for insights and how you've related to that, especially recently? 

Alex: Sure. When I was 14, I was thinking about what I wanted to do when I grew up. And I just listed a bunch of big ambitious things, and on that list was building superhumanly intelligent AI. And then I noticed that if I did that one, I could do all the other things on that list. That was when AI first became interesting to me. 

I think AI alignment first became interesting to me when I went to SPARC, the Summer Program for Applied Rationality and Cognition, which was basically my first deep immersion in the ideas of the rationality community and AI alignment. I think I always thought of it as probably one of the most important problems out there, and something worth dedicating your life to, but also pretty far away.

I went to SPARC in 2012 and I basically held this view until about 2017, when people I trusted were like, "No, actually, maybe it might not be far away. It might be coming in 10 years." And I was like, "Wait a minute. These are smart people I respect. And I should be at least trying to figure out whether they're full of shit, rather than just cordoning off what they're saying and just continuing to do what I'm doing." 

So that was when I went full time into exploring the world of AI alignment. I spent a lot of time then trying to think about AI timelines and double-crux with people who had long timelines and short timelines to try to really understand where people's models were coming from. 

And around this time I also went around trying to talk to all the leading AI safety researchers from the different research camps, like people at MIRI… I talked with Paul Christiano a lot, I talked with a bunch of the safety researchers at DeepMind… and a bunch of the people who were like, yeah, maybe we can actually get reasonable global coordination around AI, and here's how we're thinking about it. 

And I felt like I got up to speed with what people were thinking about around then. This was like 2017, 2018. And I basically walked away with the conclusion that no one was really directly addressing any of the actual biggest, thorniest questions underneath either technical AI safety or AI coordination. 

And, in parallel, I also started exploring, I don't even know how to describe it… "woo" / spirituality-type things…  

Divia: Like circling? 

Alex: Yeah. Circling, CT charting from Leverage… I talked to Leverage people about CT and learned… 

Divia: Yeah. For our listeners who may not know, this is called connection theory. Geoff Anders came up with this, I think before he started Leverage actually, but it's a theory of psychology he has. There are parts of it that I end up referencing myself, for sure. 

It includes such things as that people have a number of intrinsic goods that they care about for their own sake, and it's a constraint of the theory that people need to believe there's a path to achieving their intrinsic goods.

People could also look at some information online about it if they want to, but anyway, it's a psychological theory. 

Alex: Yeah. For me, the object-level details don't even really matter that much. The thing that I got most from it was watching how people who believed the theory thought in terms of it, and were able to come up with explanations for things, and help me understand myself in ways that I wasn't previously able to.

I felt like I picked up a bunch of valuable tacit, implicit models from their tacit, implicit models, which were wrapped in this package they called connection theory.

But in any case, I was basically learning that psychology was a thing, and that I could refactor my psychology.

[8:30] Healing infant trauma – Alex's first formative spiritual experience

Alex: And… the first time I had a sense of, oh wait, maybe something in the reference class of spirituality might be crucial for thinking clearly about AI alignment and addressing the biggest problems there, was when I was working with a bodyworker, and I was just expressing to her that I felt small and I wanted to curl up into a little ball and cry. 

And she suggested that I do that, which I found very surprising, but then I did that and she sat next to me and started holding me like a baby. And as she was holding me, I basically felt like I was a baby again. When I access memories of being a kindergartner, I have a sense of what it's like to feel smaller – my limbs are smaller and stuff. In that particular moment, I had a sense of being really, really tiny, with really tiny limbs. Basically, no conscious thoughts at all. I was just like a reflex bag, and there was this deep, deep, deep sadness that I was carrying that was coming out, that I felt like was being released.

When I regained consciousness, I had the sense that there was this weight I'd been carrying on my shoulders my whole life that I was no longer carrying. And, not literally my whole life, but definitely at least for as long as I've had episodic memory. And so, it expanded my concept of what conscious experience could be at all. Like, I just discovered that there were new degrees of freedom in what consciousness could be at all. 

I also, in that same session, could suddenly feel my body a lot more, and suddenly just understood what people meant when they were like, "Alex, you're in your head all the time." And I'm like, "Ah, yeah, compared to what I'm feeling now, I was in my head all the time." And I also just started being able to emote more, and use hand gestures more, in social interactions afterwards.

[10:25] The relevance of spiritual experiences for moral philosophy

Alex: And so, the reason it felt relevant was because it seemed to me like any actual account of human values, or how to think clearly about ethics, would be incomplete if it didn't take into account that there were experiences like this.

I basically made the update of: people's endorsed values and ethical positions might rest on psychological wounds they've had before they developed episodic memory. And therefore, any complete account of what the good is, or what it is that humans ultimately value, must also take into account that there might be distortions of our judgments of those things that were laid in place from before we were conscious. 

And so I was like, wow, very interesting! This seemed like such a huge, massively important fact about the world that seemed basically almost completely unknown by most of the intellectual elite that I'd encountered before.

And then I remember talking to a meditation coach [Michael Taft], and I was just like, man, this just happened to me. And he was like, oh yeah, that's a thing. The Buddhists have known about this for thousands of years. They didn't call it infant trauma because they didn't have a concept of trauma. They thought of it more as like evil spirits leaving your body, but this is really what they were referring to.

And I was like, "Very interesting!" And then I just talked to other people in my circles who thought it was a thing, and they were like, yeah, that's a thing. And all of these people were really into spirituality and thought there was something to religion. 

I felt like I was graced with this surprising experience that almost nobody has – that, in particular, most of the intellectual elite don't have, that's clearly crucial for understanding a bunch of really important philosophical questions – that a bunch of people in spiritual traditions are familiar with, and furthermore, they tend to have, at least of the ones I talked to, fairly convergent philosophical views. 

And so I'm like, okay, that's really interesting! Let me try to understand them. And maybe the solutions to all the biggest questions of AI alignment and AI coordination are actually just sitting under our nose, but just not legible to most people who haven't had these kinds of bizarre experiences.

[12:34] Convergent philosophical views among religious and spiritual traditions

Ben: What are some of these convergent philosophical views?

Alex: So, first I'll caveat that I'm filtering this through my understanding, and this isn't necessarily a good representation of what representatives from various different traditions actually think. But, one that I'd say is that the true and the good are actually the same thing, and to the extent that they [appear to] differ, it's actually because of psychological distortions we have [such as trapped priors, and ignorance of dependent origination].

Divia: Which is kind of the opposite of, like, you can't derive an ought from an is. 

Alex: Yes. 

Divia: So, having acknowledged that we think it's sort of a problematic term, that they were moral realists was a convergent philosophical position. 

Alex: Yes. There's a way in which every religion is kind of moral realist – Follow God! And the prophets tell you how to follow God. 

Divia: Right. 

Alex: And there's a thing where everything that is, is God's will, and what is true is what is, and what is is God's will, and therefore it's good. 

Ben: My immediate reaction to hearing this is thinking about the parallels to the famous physicists' saying about, um, beauty and elegance being heuristics they use for figuring out whether or not a physics insight is true, or something around like… God, I wish I remembered the actual quote, I'll find it and link it in the show notes, but like, that, and some deep appreciation for aesthetics, having insight or truth–

Divia: Truth, beauty, and goodness, those are the three, right? Am I right that that's Plato? I'm not actually sure. In my head it is.  

Ben: I think so. Yeah. 

Divia: Yeah. And I mean, certainly what comes to mind for me there, and this is a topic that I have, I've been pretty interested in over the past few years, like in contrast to, presumably, the orthogonality thesis. Which is something that comes up – there's a weak version and a strong version – but basically the idea that an AI could sort of have whatever values and that we shouldn't really assume that there would be much relationship between its capabilities, and its intelligence, and what it cares about.

Alex: Right.

[14:43] Alex’s take on Buddhism

Divia: Okay, so there are a bunch of different directions… I don’t know, I think I get, talking to you I’m like, ah, there are so many interesting things we could talk about. But one of them is, since I've known you, you've then gone and tried to investigate a bunch of different religions, and talk to leaders and practitioners of these religions and draw out a bunch of different insights. And it seems like you think that there are a bunch of convergent things, and that there are particular strengths of different religious traditions. Is there anything that you can maybe say about insights you've gotten from Christianity, Islam, Buddhism… and how they relate to each other?

Alex: Sure. I kind of think of them as different articulations of the same core message tailored to different cultures and time periods. And so the kinds of things they emphasize are different across each. 

What I get most from Buddhism in particular is clear metaphysical views, metaphysical insight, and all their instructions on meditation. Buddhism as a religion is basically like, here are instructions for attaining mystical insight, and I'm like, yup, I'm really glad that exists. 

Divia: Have you meditated a lot yourself? 

Alex: Yes, but not that much in Buddhist traditions. But I have learned a lot talking to people from Buddhist traditions who really grok the metaphysical insights because they're part of a tradition that keeps it alive. I think the people I've talked to who seem to most deeply grok the metaphysical insights on an embodied level are hardcore Buddhists [like Soryu Forall from MAPLE].

Divia: Who have meditated a lot. 

Alex: Yes. 

Divia: Yeah, that makes sense. 

Alex: And they're part of a continuous lineage from however long back. 

[16:19] Mathematical formalizations of truth, goodness, and beauty might essentially coincide

Ben: Sorry, but I'm super curious about this now, as I'm trying to think through how I would relate to this… I guess I can think of things that I feel are beautiful, but not good, and I'm feeling like I'm probably missing something here, but I'm like, I don't know… some beautiful flower that's poisonous if I touch it, or something like that. This is such a naive take on my part, but can you just speak more on what you think of this relationship between true and good and beauty?

Alex: Yeah, I think my position is: if we were to ultimately understand what each of these concepts is really trying to point at, then we would see a convergence, but also, I think our everyday understandings and usages of these concepts are pretty far from what I'm calling the ultimate versions of these concepts.

Ben: Okay. And to tie this into your points on meditation, or something about Buddhists having it deep in themselves, is it some way in which when you sit with some of these ideas longer, they become less confused, and you're more able to orient to them correctly? 

Alex: Yes. Although the way I would put it is like, if we were to ground them in as non-confused an ontology as possible – if we found a mathematical theory of metaphysics that was rigorous, in the way that calculus was rigorous and formalized the field of natural philosophy – I think the concepts of true, beauty, and good within this mathematical metaphysics would essentially coincide. 

Divia: I don't know, my own take when I think about the flower that is beautiful but poisonous is that there's something that seems important about the context. Like, I'm reminded of what Alex was saying about observers and their values, where I'm like, if what I saw is somebody about to eat the flower, is that still beautiful?

I don't know. I think I don't like it. But like if I saw it in some context where somebody got to look at it, but there was no danger, then it actually does seem more beautiful… also kind of a shallow treatment of the subject, but… 

[18:21] Alex’s take on Christ’s crucifixion

Alex: I do appreciate that concretization, and I think a pretty good segue to what I get from Christianity. There's a way in which Christ is like, "Yeah, me getting crucified? That's good. I'm being tortured to death because I'm being scapegoated and publicly humiliated. And you know what? That’s good. 

"I asked God [my deepest sense of truth and goodness] if there was any alternative to this, and God was like, nope, this is the thing you should be doing. And I was like, all right, I'm going to go do this and it's going to be really hard.

"But even while I'm doing it, I'm going to be radiating out in my consciousness that there's no resentment; I forgive everyone who's crucifying me. What's happening to me is a central example of what anyone would normally intuitively consider bad, evil, worthy of punishment, unworthy of acceptance or forgiveness. And I'm just going to totally upend that in my own consciousness, radiate that out to everyone in public, and have that reverberate for thousands of years for the rest of humanity."

That is my headcanon for what happened with Jesus. I do not actually know if he actually existed. Not a crux for me! 

Divia: This is maybe a response to a conversation Alex and I have had a number of times, where I'm like… but do we even know for sure there was a historical Jesus? And you're like, that's not the point. 

Alex: Yes. 

Divia: Whether he did this or not, there's something about the archetype. 

Alex: Yeah, I think there was definitely a meme that got created that is extremely powerful and captures extremely deep truths.1

Divia: Right. And the sort of narrative account described in the Bible – that it happened exactly like this – isn't the point. You think that, clearly, because Christianity has had the impact that it has, there was something archetypal that got in there.

Alex: Yes. Jesus is an inspiration for me, especially for how I should show up in interpersonal relationships. 

[20:14] Alex’s first "direct experience of God"

Divia: You weren't raised with much religion, right? 

Alex: That's right. My parents were always like, this never made sense. I would talk to other kids at school, and I'd be like, this doesn't make much sense. At MIT, I would have interfaith dialogues where I'd be like, this doesn't make sense, here's why I think this doesn't make sense.

Divia: Interesting. I didn't know, I mean, I guess that tracks that you would have gone to the interfaith dialogues. That’s pretty cool to hear. 

Ben: Was there a singular moment for you where this changed and it started to make sense, or was it more gradual? 

Alex: In my… I think my second ever ayahuasca ceremony, I felt like I got a direct experience of God, whatever that means. What I can say was that it was awe-inspiring, and that when I looked at religious texts afterwards, their usage of the word "God" made a lot more sense to me. 

Divia: Yeah, I like that you put it that way, because it's sort of tricky, from my perspective, to operationalize what it even means when people talk about believing in God or not. But that's something concrete, where, like, you can read the sentences with "God" in them, and you can be like, oh, there's something about that that makes sense, whereas before you were like… what?

Alex: Yeah, before I was like, I have no idea what you could possibly be even trying to say with this, besides Interventionist Sky Father, which is clearly fake.

Divia: Right. And yeah, and it's not so much that you've updated your position about the Interventionist Sky Father. 

Alex: Definitely not!  

Divia: But that, now you're like, okay, I see what they could mean. And why someone might write these things and expect other people to have some experience that's worthwhile reading them, something like that.

Alex: Right. When Jesus was like, you need to leave your family for me, it just reads as super narcissistic. But after the experience, I was able to understand God as this hybrid of true and good that… I don't understand yet, but have some sense that maybe there is some way to actually understand in principle.

And if I hear Jesus instead as saying, "You need to put truth and goodness above all your familial relationships", I'm like, oh yes, of course, that makes perfect sense! That's not narcissistic at all! That's just straightforwardly true. 

[22:23] Psychological distortions as a central problem in AI alignment and AI coordination

Divia: Yeah. And can you help tie this in again to how this relates to the AI stuff? Because I think I've heard you say a lot of things about, I don't know, the centrality of addressing cognitive distortions, and how spirituality from your perspective seems to be a lot about that. 

Alex: Right now, I think the central problem in both AI alignment and AI coordination is: if someone is very distorted about what they actually want, and doesn't want to admit it, and is willing to fight with all their force to not admit it, how do you relate in that situation? 

If an aligned AI can tell that their operator is acting from a deep psychological wound that they're covering up, that they're trying their best to not see, should the AI just go along with what the operator is doing, or should the AI actually help them recognize that they're misguided?

When I ask this question to mainstream AI alignment researchers, the answers I get are actually quite divided. A lot of them are like, the goal of the AI should be to satisfy the preferences and intentions of their operators, and if their apparent intention is to just continue with the distortion, then that's what the AI should do. 

And others are like, that seems bad – and also theoretically difficult, in that it seems plausible that there might exist a theoretical technical solution to how you get the AI to help a human form true beliefs and work through their distortions, and less likely that there might exist a theoretical technical solution to how you build the AI to help them arbitrarily maintain their lies or self-distortions in the future.

And the thing I find most compelling: it wouldn't be sufficient to end the problems in the world if we built AIs that were aligned with people's psychological distortions without healing them, because I think these psychological distortions are basically what's driving Moloch right now, and if we're building AIs that are just amplifying them, then we’re just amplifying Moloch.

Divia: So, I mean, would you say something stronger? Like you think that wouldn't be enough to solve the world's problems, but do you also expect it to even be good? 

Alex: That feels a lot like asking, is having more powerful AIs around good? 

Divia: Yeah. Yeah. I'm curious for your view on that, for sure. 

Alex: I don't know. Many strong cases both ways. 

Divia: Sure. 

Alex: Like, lots of people are like, yeah, maybe RLHF was bad, because it accelerated capabilities. And I'm like, yeah, maybe you could say the same thing for intent alignment. But maybe it's good that capabilities are getting accelerated.

Divia: Yeah, you think it's hard to say. 

Alex: Yeah. And I don't know how much I actually buy the argument about RLHF being bad because it accelerates capabilities, I'm just using it as a comparison. 

Divia: Sure. 

[25:18] A secular lens on spirituality – addressing psychological distortions

Alex: And so I think the question of, how do we relate with these deeply embedded psychological distortions that we try our utmost to not see, is sort of the central theme of what spirituality is all about, according to me. 

On this lens, the practice of spirituality can basically be understood from a totally secular lens of, like, hey, you have a psychology! It's got a bunch of distortions in it. There are things you can do to address these distortions, that will cause you to have more true beliefs, and be more the person you actually are! Maybe you should consider trying something like that! 

Ben: But you think that spirituality provides a more powerful or different frame than just a secular one, is what I'm picking up on here. There is more to this than just the secular conception of it as a useful mental trick.

Alex: Yes. In particular, I think that – again, caveating that when I say spirituality, this is Alex's steelman of spirituality that he endorses, and that there are many "spiritual" people who just drive me insane when I talk to them about how they think about things – 

Divia: And you're not necessarily saying, I don't know, you take somebody who has difficulty in their human relationships because of distortions (almost everybody I assume), and then they start going to their local church every Sunday. You expect, like, you're not necessarily saying, oh, well, that'll for sure fix it. 

Alex: Yeah, I'm definitely not saying that. 

Divia: You're saying something more like, you think the actual wisdom seems to be there in the lineage of a bunch of different religions, and the people that know it best seem to have some embodied understanding of that, that they are attempting to pass on. 

Alex: And that there's a rhyme and reason behind the kinds of things they say that are not present in Scientology, for example. And I think Leverage Research missed a lot of these things, and that's part of why they imploded, for example.

There's kind of a broad, high-level sense of what the end goal should be for clearing through psychological distortions, that I think is deep and subtle, and that I don't see from therapeutic traditions or cults like Scientology.

Divia: But it does seem convergent across the major religions that people would typically agree are religions. 

Alex: Yes, I think especially among the mystic practitioners of the religions. I think the mainstream versions of all the main religions are also missing a bunch of the important stuff. 

Divia: But the major religions all have mystical traditions within them that you think are more convergent. And in particular, convergent in terms of, like, how a mind ought to be? Or how a human mind ought to be? 

Alex: Yeah, although I would frame it more as what the most relaxed, desirable, natural state of the mind is. 

Divia: Okay. And, I think you've already done this, but if you could try again to say in your own words, what would that state of the mind be? 

Alex: Basically, just embodying true equals good. Like, everything that comes up in your experience, you don't resist. Including, for example, if there's something you find aversive, not resisting the aversion either. 

Divia: Yeah. Something in old… I think I got this from Michael Vassar many years ago… I think what he said was something like, I'm not supposed to have preferences over the current state of the world, only over future states of the world. 

Alex: I have never thought of it like that. I think that resonates. 

Divia: Okay. Maybe a little like that, maybe not. 

Alex: Yeah. 

[28:53] Introducing Chris Langan and the CTMU

Divia: Okay. So can I switch gears a little and ask you about something different, but related? Which is the Cognitive-Theoretic Model of the Universe. This is something else that we talked about some, I think I understand a little of it, but I don't really understand it, so yeah, can you tell the listeners what that is?

Alex: It's this theory of everything by this guy named Chris Langan, who was featured in the book Outliers by Malcolm Gladwell as an example of someone who's very, very, very smart, but didn't have an upper-class background, didn't learn the ways of the elite, and therefore didn't find much success within his lifetime. He's also been on the news a couple of times, billed as "America's smartest man!", "Man with the world's highest IQ!", or something like that. 

[29:41] How Alex got interested in the CTMU

Alex: And I was always curious about it when I heard it mentioned, and it never made any sense to me, until one night about three years ago, when I got high and looked at the website and noticed that some sentences made some amount of sense at all, and resonated with a bunch of the most inchoate, deepest, inarticulate metaphysical thoughts I had.

Divia: Interesting. Do you happen to remember which sentences? 

Alex: I remember there was a particular diagram… 

Divia: I mean, it's a hard question, because you said most inchoate, so… 

Alex: Yeah. And Chris invented a lot of new terms and uses them liberally in ways that I now think are very defensible and very precise. But I still think they're extremely hard to understand for the uninitiated. 

I remember that after that night, I shared the website with a couple of friends to see what they thought. And different friends would light up at different portions of the website. And, so, I got even more curious about it.

And then I shared it with one particular friend2 who is much, much smarter than me technically, and who has read way more philosophy than me as well, and has explored esotericism much, much more than me. And he looked at it, and he was like, "Oh yeah, this generally makes sense. Like, I've seen a bunch of these ideas before. It sounds a lot like Neoplatonism." 

And I was like, huh, maybe my friend here can actually understand Chris's work directly. And I asked him to review Chris's work, and he did, and he said, "That was a very dense read. It was challenging to comprehend, but I comprehend it. It's legit. It seemed very insightful, and I'm glad I read it."

And then I just googled how to contact Chris, and found his Patreon, and then gave him a donation so I could schedule a call with him. And then me, Chris, and my friend talked, and Chris basically vetted that my friend seemed to get it. 

And then I was like… okay, this makes the CTMU even more interesting to me. 

Divia: Yeah, it sort of passed a bunch of checksums. Where the original round was like, a bunch of it sort of resonating with stuff that was important that you hadn't known how to say. The next round was like, your friends had similar reactions, including to parts that were not the same as each other or the same as yours. And then the final one was this guy that you respected a lot understanding it, including according to Chris. 

Alex: Yes. Over this time period, I would also look at other papers he wrote, and mostly not be able to understand any of it, but understand bits and pieces, and be like, "Oh, he seems exactly correct about these things in very deep and nuanced ways that I don't hear people talk about very much."

Divia: Do you have any examples? 

Alex: He was critiquing Bohm's pilot wave interpretation of quantum mechanics in exactly the right way. That was the most salient thing.3

[32:48] Alex attempts to summarize core ideas of the CTMU

Ben: And is there also a high-level overview of what this philosophy or belief is?

Alex: That's what I'm working on right now. I mean, if you go to http://hology.org, you are going to see Chris’s high-level overview, but it has a lot of words that he coined himself. And that was the thing I was initially looking at, where I was like, oh, this makes more than zero sense to me. 

In terms of something that can actually land with people in my circles, I currently don't have anything I can point people to. [Jessica Taylor has since published her review of the CTMU, which is the most approachable introduction to the CTMU I know of.]

Divia: And, maybe just your own short description? 

Alex: What I'm about to say is not going to be remotely close to a summary. It is going to try to describe some of the high-level ideas.

One is that it's fundamentally dual-aspect monist, in that it says that neither matter nor mind is primary, and it's more like there's a third thing that's not quite both, that gives rise to both, or can be interpreted in both ways, that's the actual true ontological primitive of reality. 

Another is… the idea of logical time, as Scott Garrabrant talks about it, is totally central to the CTMU. Chris calls it metatime rather than logical time. And that in the beginning of logical time, there was pure potentiality – "the Godhead", so to speak – and what creation is, is this pure potentiality evolving through logical time.

The CTMU also posits that conscious observers, like you or me, are sort of like holographic shards of the entire conscious entity that is reality progressing through logical time, and that what our consciousness is, is sort of our shards progressing through logical time. 

Divia: Yeah, that reminds me a lot of my – who knows if it even has any relationship to the real thing – understanding of the Hindu myth also.

Alex: Yes. One way I describe the CTMU is that it's the best synthesis I've encountered of all of the metaphysical claims across all the spiritual traditions. 

Divia: Is he also a scholar of religions? 

Alex: He is actually a genius according to me. And he knows a lot about religion. I ask him a lot about religion, and he’s like, "Yes, this is how you understand it in terms of the CTMU! Very straightforward. Those guys were smart and onto something, but I figured out how to fill in the details." And I'm like, "Wow, you actually did. Thanks, Chris. I appreciate you a lot."

[35:25] Alex’s ITT-passing habit

Divia: Yeah, cool. Another thing also that I think is a pretty strong thread in how you relate to things, which is sort of coming up here too, is that I think you… I don't know, rationalists talk about steelmanning a lot. Like, taking an idea and trying to imagine the strongest possible version of it. 

But I feel like you do it more than most people, in a different way from most people, and this is pretty central to what you're up to. Does that seem right? 

Alex: Yes. 

Divia: Want to say more about that? 

Alex: When I'm around other East Asians, I don't feel like I'm doing this atypically much. I think steelmanning is just actually a large part of East Asian intellectual culture. And even just social culture. 

Divia: So you think it's because you're Chinese, basically. 

Alex: And that I have a predilection for this, even among Chinese people. 

Divia: But it's more typical. 

Ben: In your experience, is the steelmanning that's done in Chinese culture explicit or implicit? Like, is it explicitly restating a belief in as strong as possible terms? Or do you feel like this is more of a norm that people have in how they react and relate to each other? 

Alex: It's more like, I think in East Asian cultures, people have way wider error bars on what other people mean. And so they are not fully willing to critique or dismiss a position before they can pass that person's ITT. Ideological Turing test.

Divia: Meaning, being able to state their belief back to the point where they're like, yes, that's what I meant. 

Alex: Yeah. 

Divia: Yeah. I appreciate very much the way you have, for example… like when you got interested in AI safety, you did go around and ask all the people what they thought about all the things, and try to make sense of that and synthesize.

Alex: Yeah. 

Divia: I think it's, I don't know. I certainly want more of it. 

[37:15] On Chris Langan’s political views

Divia: Also, just sort of, I don't know, to check a box or something… if people look up Chris Langan on Wikipedia, under his views section, they'll be like, ah, these are some far right views. The CTMU is obviously not about that, but… anything you want to just fill in for people that might be curious about that part of it? 

Alex: Yeah, I think people largely associate Chris with his political views now, which is… 

Divia: Yeah, I didn't actually realize this. I'd only heard of him in the context of talking to you, and I had read Outliers, but then I looked him up on Wikipedia, and I was like, oh, okay, that's there.

Alex: Yeah, his political views are extremely offensive to the mainstream, and also I think not what he fundamentally cares most about. What he cares most about is people understanding his theory – but most people can't understand his theory, whereas they can understand his political views, and that's why he gets associated with them.

I basically never talk with him about politics. Sometimes he expresses his views and then I ask for clarification for where he's coming from, and I'm like, oh, I get where you're coming from. I can empathize. I don't agree, but that's neither here nor there.4

When people bring up his political views, I often mention that Heidegger was a Nazi, and that doesn't mean his philosophy should just be dismissed outright. I don't think Chris is remotely a Nazi. In my personal interaction with him, he's been a wonderful person. He's been kind and generous with me, and I respect him personally. 

People also often ask: if someone has views like this, why should we trust their alleged spiritual insights? And my sense is that Chris is truly, earnestly doing the best he can to live in a way that's kind and compassionate to everybody, to live from behind the veil of ignorance as though everyone else's suffering and joy were his own. And he has the best theoretical explanation of why one ought to live this way out of anyone I've talked to. 

[39:08] Metaethics from UDT and the CTMU, pt 1 – acting from behind the universal veil of ignorance 

Ben: And can you say more on what this theoretical justification is? Or, I'm curious… I think I get a little bit more now of what you're pointing out with the philosophy, but how does this translate into this "why be good" question?

Alex: A short version is something like, we're all "children of God", in the sense that we're all logical descendants of "the one Godhead", and we should all act in the way we would want to act behind the veil of ignorance. And behind the veil of ignorance, we share an identity with everybody. 

And this bit about, like, we all ought to act the way we would act behind the veil of ignorance… I mean that, from certain updateless decision theory interpretations, it's actually in your best interest selfishly as an agent to act in that way.5 Not, like, you're being a good boy.

Divia: Separate morality juice. 

Alex: Yeah. There's not separate morality juice. 

[40:03] Logical time and the "lazy evaluation" of reality

Divia: Yeah. And do you wanna say a little more about what you mean by logical time? 

Alex: The best example is Newcomb’s paradox, I think. Which, by the way, Chris Langan wrote extensively about in the ‘80s or ‘90s. His analysis of it was like, to understand this, you need to be thinking in terms of simulations, and you need to have a different notion of time, which is basically MIRI’s take on it now.

I think one of the apparent paradoxes with the setup of Newcomb's paradox is that somehow the contents of the opaque box are physically determined already – in temporal time, they're already there. You can't cause what was physically past to be different from what it is.

But there's another sense, in which the contents of the box "come after" what your decision is. And this notion of "coming after" is my preferred pointer or gateway into the whole concept of logical time. 
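[A toy sketch of this "coming after" relation, as an editorial illustration with made-up names – not Chris's or MIRI's formalism. The idea: the predictor fills the opaque box by evaluating the agent's decision procedure, so the box's contents are logically downstream of the decision even though they are physically fixed earlier in temporal time.]

```haskell
-- Toy Newcomb setup (illustrative names). The opaque box's contents are
-- defined as a function of the agent's decision procedure, so in logical
-- time they "come after" the decision, despite being physically earlier.

data Choice = OneBox | TwoBox deriving (Eq, Show)

-- The agent's decision procedure: here, a committed one-boxer.
agent :: Choice
agent = OneBox

-- The predictor simulates the agent to decide what the box contains.
opaqueBox :: Int
opaqueBox = if agent == OneBox then 1000000 else 0

-- Total payout: the opaque box, plus $1000 if you also take the clear box.
payout :: Choice -> Int
payout c = opaqueBox + (if c == TwoBox then 1000 else 0)

main :: IO ()
main = print (payout agent) -- 1000000
```

Changing `agent` to `TwoBox` changes `opaqueBox` too, which is the sense in which the decision comes "first" in logical time.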

Divia: That makes sense. So it's like, "come after" in the sense that it caused it.

Alex: Yeah, for some notion of causality that's different from the usual physical notion of causality. There's an analogy I found from David Bohm… he once described reality as being like a painting that gets filled in brushstroke by brushstroke, where the usual physicalist interpretation is like, you fill in one vertical row of pixels of the painting, and then from that, you get the next vertical row of pixels.6

And the order in which the painting gets filled in roughly corresponds to the kind of thing I mean when I'm talking about logical time, or metatime. 

Divia: And by the painting, you mean like, everything that we… 

Alex: Reality as a whole. Everything in reality. 

Divia: Yeah, there's definitely something about that that I find intuitive, though I also find it hard to put into words what I even mean by it.

Alex: Uh-huh. Chris has put it into words, but the words are hard to understand. 

Divia: I mean, it reminds me a little of like, lazy evaluation in programming. 

Alex: Yes, it does feel a lot like that. 

Ben: Mmm. Can you say a little bit about lazy evaluation? 

Divia: Oh, sure. I don't even remember… some of the functional languages do this, right? Like, Haskell does it? Where, if something is… 

Alex: Things only get computed once they're referenced. 

Divia: Yeah. And by reference… I mean the reference thing is sort of interesting. Like, referenced by what?

Because in the Haskell program, I think I understand it! You run the program in a way that's pretty easy to understand with temporal time, and it's going to produce some outputs, and then, that's what it means by "referenced". 

Whereas in this case, I'm like, yeah, until it's referenced. But then I'm like, well, what exactly do I mean by reference? I don’t know. 

Alex: Yeah. I’m also confused about this point. 
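[Since lazy evaluation came up: a minimal Haskell sketch, with hypothetical names, of the "only computed once referenced" behavior Divia and Alex are gesturing at. You can define an infinite structure up front, and only the parts something actually references ever get evaluated.]

```haskell
-- Lazy evaluation in miniature: an infinite list is fine to define,
-- because nothing is computed until something references it.

-- An infinite list of "facts", one per natural number.
allFacts :: [Integer]
allFacts = map expensiveFact [0 ..]
  where
    -- Stand-in for some costly computation.
    expensiveFact n = n * n

main :: IO ()
main =
  -- Only the first three elements are ever evaluated ("referenced");
  -- the rest of the infinite list remains an unevaluated thunk.
  print (take 3 allFacts) -- [0,1,4]
```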

Divia: It also reminds me of how there's moral realism, and then there's mathematical realism too. But what exactly does that mean? It seems important for sure. And I think the thing there is: do mathematical facts exist on their own, before something's referencing them? And it seems like he's saying no, basically.

Alex: Yes. Or I mean, it's complicated and I intend to ask Chris about some nuanced questions there. 

Divia: That makes sense. 

Alex: Yeah. I mean, ultrafinitism is like, these extremely large numbers that we’re never actually going to be able to reference directly, maybe don't actually exist in some relevant sense of "exist"... is a position that I'm very open to and care about understanding better. It’s one of the things I intend to ask Chris about in a future call. 

[41:31] Spirituality vs the orthogonality thesis

Ben: One thing I wanted to jump back to is something Divia brought up at the start, which I feel like if I understood this topic better, I could make some great pun about logical time or time, whatever.

The orthogonality thesis. Which often is shorthand for, like, intelligence and values don't have to be aligned. Like, you can have very smart things that might end up having very different values than humans. And this is plausibly a problem in an AI alignment context. 

One of the things that I'm trying to predict here from what we're talking about is like, maybe there's some implication of… well, maybe actually the orthogonality thesis is wrong, that you would expect intelligent agents to be descended from this same Godhead. Is that true? 

Alex: The orthogonality thesis. My first thought is that, just like markets can stay irrational longer than you can stay solvent, I think intelligences can get powerful sooner than they become moral enough to realize that they shouldn't kill everyone. Even though there's a sense in which I no longer believe in a strong form of the orthogonality thesis, there's still a weak form that seems pretty real to me.

Basically, I would say like, yeah, it no longer seems plausible to me that a paperclip maximizer could tile the universe with paperclips thinking this is what it actually truly cared about, without first realizing that it was confused and that there's some morally real thing that should be done instead. But I do still think it might kill us all before it realizes this.

Divia: That it shouldn't have done that. 

Alex: Yes. That being said, like… an analogy I use is that I don't think a superintelligence could stably maintain the belief that 51 is prime. I think Eliezer… disagrees about this?

Divia: I think he said that in a LessWrong comment, right? 

Alex: Yes. And that was always extremely confusing to me. It just doesn't really make sense. It just doesn't really add up to me. You just can't fight truth for that long! 

Divia: My interpretation of what you're saying is that there would be some sort of complicated structure that would have to be in place for it to not notice, and then at some point you notice that structure.

Alex: Yeah. 

Ben: I'm confused by Eliezer's comment there as well, because I remember Nate Soares of MIRI – they both work in the same org – had a comment pointing out that this is a problem for a lot of alignment schemes that count on deceiving the AI in some way, or on getting it not to notice ways that it might become more powerful: an AI that is intelligent would start to notice its confusion and route around the deception. So… yeah, I see what you're pointing at, which is just that an intelligent agent of the type that we're talking about would notice the thing with 51.

[46:19] Metaethics from UDT and the CTMU, pt 2 – elaborations on "ethics as self-interest"

Alex: Right. And I think they would likewise also notice that they ought to act behind the veil of ignorance for the benefit of all beings, to borrow some terminology from Buddhism.

Divia: And can you unpack that a little? There's something about it that makes intuitive sense to me, thinking about updateless decision theory, but can you try to make it more explicit what you mean by it being in their selfish self-interests? 

Alex: If you and I are agents and we can both recognize each other as the kinds of agents who would act behind the veil of ignorance, we would coordinate with each other better and selfishly benefit from that.

Divia: No, sorry. I feel like there's some component that's like, there's a practical fact of the matter, as far as I can tell, that human communication is pretty high-bandwidth. And so, in fact, in practical cases, we can't fully see each other's source code, but people in my experience have highly imperfect, but pretty high-bandwidth, often correct impressions about things like, if I imagine you in some situation, what would you do?

Alex: Yeah. 

Divia: And that this does make it easier to coordinate, if, when I imagine those things, it comes up like, yeah, Alex would help me, maybe Alex can tell that I would help him, maybe we help each other, that sort of thing. 

But like, my impression is that you're saying something metaphysically stronger, that like, even if there were no practical opportunities for two agents to coordinate, there would be something…

Alex: I think it depends on what you mean by "practical opportunities for agents to coordinate". The same principle that causes me to be nice to ants, I think, would cause a superintelligence to be nice to us, which I think would cause a super-superintelligence to be nice to it.

Divia: Right. And so it's not that you expect the ants to be able to tell that you would help them, and therefore they would help you. 

Alex: Yeah, that's right. 

Divia: The ant's not gonna help you. Not in a straightforward way, anyway. It's not gonna bring you a piece of food or something. 

Alex: Yes. 
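[An editorial sketch of the coordination intuition above, in the spirit of "program equilibrium" – a drastic simplification with made-up names; real versions of "FairBot" in the decision-theory literature use proof search over the other agent's source code rather than a simple label check. The point it illustrates: being verifiably the kind of agent that cooperates with agents like itself earns more, selfishly, than mutual defection.]

```haskell
-- Toy program equilibrium: an agent cooperates exactly when it can verify
-- the other agent decides by the same symmetric rule.

data Action = Cooperate | Defect deriving (Eq, Show)

-- A policy is identified by an inspectable label ("reading source code").
data Policy = FairBot | DefectBot deriving (Eq, Show)

-- FairBot cooperates iff it verifies the opponent is also FairBot.
decide :: Policy -> Policy -> Action
decide FairBot opponent = if opponent == FairBot then Cooperate else Defect
decide DefectBot _      = Defect

-- Standard prisoner's-dilemma payoffs for the row player.
payoff :: Action -> Action -> Int
payoff Cooperate Cooperate = 3
payoff Cooperate Defect    = 0
payoff Defect    Cooperate = 5
payoff Defect    Defect    = 1

main :: IO ()
main = do
  let fairVsFair     = payoff (decide FairBot FairBot)   (decide FairBot FairBot)
      fairVsDefector = payoff (decide FairBot DefectBot) (decide DefectBot FairBot)
  -- (3,1): mutual verified cooperation beats mutual defection, which is
  -- the selfish case for being the kind of agent that acts this way.
  print (fairVsFair, fairVsDefector)
```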

[48:15] Speedrunning the AI "danger zone"? 

Ben: So is one implication that you're excited or bullish about schemes that would push AI past the "danger zone"? Where it's intelligent enough to kill everyone, but not intelligent enough to know it should operate from timeless decision theory? I'm almost being a little tongue-in-cheek here, but I also do wonder if that is an implication. 

Alex: Sidestepping the danger zone might be more like how I think about it. 

Ben: That seems wise. Yeah. 

Divia: Yeah. Instead of speedrunning the danger zone. 

Ben: Speedrun the danger zone! We're just trying to figure out how to be an e/acc podcast after all. So like, wow, let's accelerate to wisdom! But sidestepping seems wise. 

Divia: More de/acc than e/acc. 

Ben: That's it. That's right. Vitalik, please sponsor the pod! 

Alex: I mean, I think if it's actually easy to figure out how to do the sidestepping, then e/acc. And if it's hard, then de/acc. 

Divia: Something like that. With the caveat that these are all more, like, vibes-based internet memes than coherent philosophical positions, as far as I can tell. Sorry, I especially mean that about e/acc. 

[49:20] Metaethics from UDT and the CTMU, pt 3 – "we are all one"

Divia: But yeah, I'm trying to better understand here… so you're saying something more like, you should be nice to the ants so that the AI will be nice to us – but maybe also sort of like, you just sort of also are the ants, and if you really understood metaphysics you would know that you are also the ants. Because people say that too, right?

Alex: Yes. The latter feels more deeply true and the former feels like a downstream consequence. It lands to me more like "we're actually all the same consciousness" than it lands to me like "instrumentally be nice so that these other beings will be nice to you". 

And insofar as the latter is true, I think it's downstream of the former. The reason I described it in the latter way is because I was trying to make it more concrete from the lens of being an individual. 

Divia: No, that does make sense. Yeah, I guess it seems important to me because it seems like there's some… I don't know if they're edge cases or fake thought experiments, but, I mean, if Jesus is gonna die afterwards, it's sort of not in his self-interest as commonly understood, where you're an ego in a bag of skin, and then when you're dead, you're dead…

Alex: Another thing that I think is convergent across the spiritual traditions is that… 

Divia: You're not the ego in a bag of skin?

Alex: Yes. The thing that we think of as the self is not the ego in the bag of skin. It's not the mind and the body. Those things in fact end when you die. But you will find that that's never what you actually were in the first place. And for the thing that is what you actually were in the first place, that's where the real values live. 

[50:52] "Reincarnation" and "the afterlife"

Divia: Do you want to say more about… certainly, the afterlife is a big topic in spiritual traditions, and at face value, they seem to say some different things about it. 

Alex: Yes.

Divia: What do you think about that? 

Alex: My first thought is that "afterlife" and "God" are similarly charged for me, in that there are so many connotations people associate with them that are very different from how I think about them.

Divia: Right. Okay, so if we taboo "afterlife"…  

Alex: An analogy that I often hear is that, if our experience of reality were like a waking dream, we tend to identify as the protagonist of the dream rather than as the dreamer. And it's more correct to locate our identity with the dreamer. 

And the type signature of the dreamer isn't something that is emergent from physics. It's more like the aspects of physics that we experience are emergent within the dreamer. 

Divia: Which is similar to what Chris was saying about dual-aspect monism? 

Alex: Yeah, that's right. Chris has this term "distributed solipsism", which he says is what God / reality is. And you and I are components of this process of distributed solipsism.

Divia: Okay. 

Alex: And, Chris's models of what happens after you die are the best, most coherent ones I've encountered. There's a book on reincarnation that I've read that I have recommended to you, Divia.

Divia: Yeah. First you recommended one that wasn’t on Kindle. I didn't do it. But then this one where there was a Kindle version, I did read it. I thought it was pretty interesting, or I read the first third or so of it, and you said I got the basic idea. I could say a little more about it, but I did find it pretty interesting. 

Alex: Right. And so, there are a bunch of… 

Divia: Though it left me with some questions about what… yeah, anyway, you might be about to say that.

Alex: Yeah, so there are lots of accounts from people with near-death experiences. There are accounts from the Tibetan Book of the Dead about what the life-between-life realm is like. There are accounts of people who were literally in hypnotherapy, regressed to a past life, and then asked to go into the light that you go to in the death transition and describe what they saw there.7

And there's a lot of mutual consistency and rhyme and reason behind what's said there, such that when I first read about these accounts, I was like, wow, it sounds not totally crazy that they might actually be talking about something at all, as opposed to not something at all. That's pretty wild. 

Divia: Yeah. Though I think that the weak version for me of "something at all" is that there's some sort of powerful archetype that, when people manage to access their beliefs… like there's some shared, deep, implicit understanding of something… that doesn't necessarily mean that that's what actually happens. 

I could try to give my model of what I thought the book said, but then I think it only makes sense when I think of it in terms of logical time, more than temporal time. 

Alex: Mhm. Yes. 

Divia: Which I assume is a feature. But then, I feel like some people who have claims about reincarnation want to claim that it adds up in a more temporal time type of way… this is the part I feel most skeptical about.

Alex: Yeah, I'm skeptical of that too. I think Chris was pretty explicit with me that that is a pretty naive understanding. 

Divia: Right, like, I don't expect to find any stories that really check out where, like, somebody was this other specific reincarnated person and they can, like, produce some sort of artifact, like they can read some ancient language or something. I don't expect to ever hear a story about that that seems true. 

Alex: Chris has accounts for how that kind of phenomenon could work. I think the typical interpretations or assumptions around it, I'm very skeptical of. 

Divia: Also like, and this is another thing we discussed in the past, but the rate of spiritual fraud is also pretty high in the world. So that's also part of what's going on. 

Alex: Yes. It would not surprise me if this all turned out to be fraud. 

[54:53] How might information get transferred across lifetimes? 

Divia: Okay, so can you talk me through how… like, you know Scott Alexander's short story about the DMT entities refusing to factor a large number? 

Ben: Which we’ll definitely put in the show notes.

Divia: Yeah, please do. So can you give me your account of how they could actually, I don't know, know the ancient language or whatever it is, like some sort of thing they really couldn't have known, like concrete information that got passed through? 

Alex: The first thing is I expect that most of the concrete information is somehow encoded in low-fidelity to begin with. Like, I think Scott Alexander once had a musing of like, how did evolution tell us to be attracted to breasts or genitals or whatever? 

Divia: No, I have wondered about this. 

Alex: Yeah. But somehow they managed. I'm like, maybe there's some similar kind of thing going on with… 

Divia: Okay, so can I say an aside about that though?

Alex: Yeah. 

Divia: So, at one point it really stressed me out that, supposedly there's some thing where, like, we're more easily scared of snakes than spiders. I think I'm more skeptical about the spiders, but I believe it about the snakes. 

And I'm like, okay, but so like, is there some like JPEG in my DNA of a snake? What's going on here? Like, how is that encoded? I don't know, it bothers me that people seem to have not figured this out. 

But then my friend Andrew was like, well, look, there are some things that are easy to… his explanation, which made me relax about it somewhat, was that the snake is one of the easier things to encode, and that's why snakes get to be so venomous. It's because we can easily encode them and know that they're scary, and so that's a part of why it all works. 

In the same way, it seems like it makes sense that bees are black and yellow, because… bright colors, high contrast, sort of fundamentally seems easy to encode. And he's like, yeah, it's the same with the snake. Anyway, that's my aside. 

Alex: That's a good point. 

Divia: And similarly, I think the thing with breasts is that they kind of look like eyes. Probably. But I don't know. I mean the obvious one is the peacock tail! They like it because it looks like a bunch of eyes, I think! 

Alex: My guess is that there are certain… it's not like there's a JPEG, but there are certain high-level features that get triggered when you visually process them, and those high-level features can be encoded pretty cleanly. It wouldn't surprise me if there were certain visual stimuli that gave people strong snake sensations, even if there wasn't a snake. Kind of like an adversarial example. 
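(A toy illustration of the "high-level features, not a JPEG" point above. Everything here is hypothetical, including the `snake_signal` detector and the stimuli: a cheaply encodable sinuous-contour template fires on snake-like inputs whether or not they are snakes, much like an adversarial example.)

```python
import numpy as np

# Hypothetical innate "snake detector": a fixed template for a high-contrast
# sinuous contour, rather than a stored image of a snake.
rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 64)
template = np.sin(x)  # the cheaply encodable sinuous feature

def snake_signal(stimulus):
    """Normalized correlation between a stimulus and the sinuous template."""
    return float(stimulus @ template /
                 (np.linalg.norm(stimulus) * np.linalg.norm(template)))

snake = np.sin(x) + 0.1 * rng.standard_normal(64)        # an actual snake outline
garden_hose = np.sin(x) + 0.2 * rng.standard_normal(64)  # sinuous, but not a snake
leaf = np.where((x > 4) & (x < 8), 1.0, 0.0)             # a blob, nothing sinuous

for name, s in [("snake", snake), ("garden hose", garden_hose), ("leaf", leaf)]:
    print(f"{name}: {snake_signal(s):.2f}")
# The hose scores nearly as high as the snake: a cheap feature detector will
# produce "snake sensations" for sinuous non-snakes.
```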

Divia: Yeah, no, totally. Anyway, sorry, this is some aside about encodings of things. Okay, so you think it's possible that all of these cases are fraud, but what sort of things do you think could plausibly be encoded how? 

Alex: One picture that's been forming, talking with Chris a bunch… Chris describes reality as a self-simulation. He basically thinks, like, yeah, we are in a simulation, and our simulator is reality one logical time step earlier, and it goes like this all the way back to the beginning. There's a way in which the simulator needs to obey all the laws of physics, but there are a bunch of free parameters that it gets to fill in, in a way that is for the benefit of all beings, and this can affect how the wave function collapses. Chris thinks that quantum wave function collapse is pseudorandom and not actually random. 

Divia: And not many-worlds. That it does actually collapse. 

Alex: Yes. And… 
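(A minimal sketch of the pseudorandom-vs-random distinction itself, not of Chris's actual model of collapse: a deterministic generator whose outputs look statistically random to anyone who doesn't know the seed, yet are fully fixed in advance. The `seed` here is a hypothetical stand-in for however the "free parameters" get filled in.)

```python
import random

def collapse_sequence(seed, n=10):
    """Deterministic 'collapse' outcomes: fixed by the seed, random-looking."""
    rng = random.Random(seed)  # fully determined by the seed
    return [rng.choice(["up", "down"]) for _ in range(n)]

print(collapse_sequence(42))
print(collapse_sequence(42))  # identical run: nothing here was ever random
```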

[58:17] Is love a spandrel?

Ben: Hey, I have questions on this, but unfortunately I'm going to need to jump off shortly. Sorry, this might derail some of the quantum, which I also have questions about, but I wanted to check in on something around the topic of spandrels… which has come up, I think Divia, in your conversation with Robin, around the question of, is something like love a spandrel? Or is it something that’s more etched in the fabric of the universe, such that superintelligent agents or other intelligent beings would discover similar conceptions of love or fun that we humans have? 

And, so, I want to check my understanding here that this is maybe one of the core beliefs that you have, or that's being influenced by your study of the esoteric and religion?

Alex: Is what one of my core beliefs? 

Ben: I'm sorry, core belief is probably a poor phrase there, but something around the belief that, yes, these values are not spandrels, that they are in the fabric of the universe. Something like moral realism. 

Alex: Yeah, the reason we have the values that we have is because we became sensitive to the fundamental structure of the universe and started trying to approximate it in certain ways.

[59:18] "Reincarnation" and "souls"

Divia: Okay, so Ben had to go for now, but Alex and I are going to keep talking about some stuff. So I'm pretty motivated to try to dig into the reincarnation in logical time and what that might mean. Though I do continue to be interested in what could be encoded, but maybe that's not the most interesting part of it.

And do you want to try to summarize the metaphysics of what people tend to convergently report under hypnosis or in near-death experiences about this going-to-the-light bit and all of that, or do you want me to try to say what I remember from the book? 

Alex: Why don't we start with that, and then I'll see if there's anything that feels important for me to add.

Divia: Okay. If I'm trying my best to think in terms of logical time, there's some sort of process where the karma is kind of advancing from life to life. And so, they'll give an example of like, okay, a person lived this life, and then they die, and then their spirit guide will either come get them or they don't need a spirit guide, and they'll sort of review how they did in that life and what they can do better. 

And it could be one that's like, "Okay, cool. This is basically on track, and you were presented with these challenges and you did pretty well". Or it could be one… like, they gave some examples of somebody where the guide would be like, "Ah, you were supposed to maybe be a better person! What happened?" And not in a mean way, but in a "let's take stock of this because this is not the point of what we're doing here" type of way. 

There's one reading of it, something about the continuity of individual experience, that I don't know what to make of. That, I think, I'm pretty skeptical of. 

Alex: Yeah. 

Divia: But something about… that there would be some phenomenology of the way that this karma kind of gets processed, in at least a collective way – I have some pretty panpsychist-type intuitions, I guess – and so in that sense, it doesn't seem necessarily super off to me.

That was the most important thing I think I got from it. What am I missing? 

Alex: I mean, there's talk of an inter-life realm at all, which sounds super wild on priors under the default metaphysics. 

Divia: They were meeting other people that had died, right?

Alex: Yes. Although, personal identity gets really strange when we start going to these places. Like, the sense in which you go into these realms, so to speak, is like… what's left of you after the body/mind/ego is stripped away? Most of us have almost no intuitive conception of what that is.

Divia: Well, I mean, that said, it seems like people in many times and places want to talk about souls, and want to describe souls as something that, among other things, persists after death. 

Alex: Right. 

Divia: This is a convergent idea in spiritual traditions, right? That there is something called a soul. Yeah, what would you say? 

Alex: I mean, Buddhism is like, there is no eternal unchanging soul. There is a mindstream, which can reincarnate, and which is also the storehouse of your karma. And I'm like, yeah, that's closer to my understanding of what's going on. But I mean, you can also think of it as a soul in some ways, I think. 

Divia: Well, yeah, so but if you didn't use the word "soul", how would you describe the thing?

Alex: I would probably just reference the Buddhist terminology for this, which I think is the most precise non-CTMU existing terminology. 

Divia: And how would you unpack it? I mean, I'm a little bit familiar. I've tried to look this stuff up sometimes, but I don't totally know what the Buddhists are trying to describe either.

Alex: I mean, they make a distinction between the mindstream, which I think is just the flow of consciousness through logical time, and a stable unchanging self, which in some sense their whole deal is about seeing through. 

Divia: You don't have a stable, unchanging self. 

Alex: That's right. Although I think there are some schools of Hinduism that are like, actually your true self is this mindstream type thing, which is different from your body/mind/ego thing. 

And so you should recognize your true self, lowercase self, to be this big uppercase Self, which is a totally different type signature from how you were thinking of things before. I understand it as different terminology trying to refer to the same general thing.

Divia: I feel like the most mundane thing that I know how to say about this is, I think many people come to realize that, okay, what they care about is not just exactly their body and their conscious experience persisting, but things they care about persisting. Which is, I think, in some sense, actually pretty obvious. 

Like, me as a teenager who was trying to be consistent in a particular type of way, was like, wait, do I care about anything outside my conscious experience? I think many of us sort of ask this question. 

But it's like, well, yes. I think the answer to that is obviously yes, and there are things where almost anybody would be willing to trade off some amount of life for some amount of other thing they care about. Even people that claim to be as selfish as they come. 

And so in that sense, I'm like, okay, that's maybe a very elementary understanding of why I'm sort of wrong to just think about myself as caring about… like, I'm not really a hedonist or something like that. I don't only care about how pleasurable my moment-to-moment experience is. But I would personally like to better understand what you mean by the mindstream. 

Alex: Me too. That's one reason why I engage so much with Chris. This is one active area of confusion for me. 

Divia: Okay. Okay, well, let me try to articulate some of my questions better, and then you can see what you think. 

Alex: Sure. 

[1:04:56] Karma

Divia: So, the idea that karma is kind of a coherent concept is maybe one thing. I live my life in a particular way, or anyone lives their life in a particular way, and then the world is kind of different afterwards. And some of that stuff could be, I don't know, maybe some of it dies when they die, like that information wasn’t… 

My best guess is that it is best understood in pretty mundane ways of information transfer, like I live my life, but then maybe I write some of it down, maybe other people see me, and ultimately that information propagates, and so everybody who encounters me is a little bit different because they knew me.

And in that way, sort of, the plot advances. And that's what it means that there's this thing that happens where some karma was processed through my life. 

Alex: There's a lot of resonance there. The picture you're describing sounds very physicalist-compatible, in a way that I think is good. And also, I've been coming to think that if you actually want to understand karma completely and exactly, such that, for example, what goes around does in fact come around for the relevant zoomed-out notion of personal identity that isn't localized to a single material lifetime, I think you need a non-physicalist understanding of karma for those kinds of things to work out.

I think what you described is part of the picture. One metaphor that I use, that I still wouldn't say I quite understand, is that there might not be obvious material traces of the life you lived left in the world, but "the simulator", so to speak, still remembers every single detail of what transpired in your life, and every single detail affects how the simulator decides how the rest of the simulation runs. This ties in with the quantum wave function collapse thing that I was gesturing at earlier. 

[1:06:47] Overt physicalist vs subtle physicalist vs non-physicalist explanations 

Divia: Yeah, and we were talking about this the other day, where in many ways it seems to add up to something similar, whether there's like, I don't know, maybe three different categories: legible physical effects, like the one where, I don't know, I wrote a book and some people read it. 

And then there are subtle physical effects, which many people seem to think are important in many cases. I can think of individual cases where they seem quite important, like maybe I entered some room and nobody said anything, but there was some subtle body language thing, there was an exchange, and now, something happened there.

And you're like, okay, but you think there's something that's neither of those, that's the thing you're gesturing at with "the simulator remembers". That you would not ever expect to see something that violated the laws of physics, but you think it's underconstrained. 

[1:07:36] Cross-lightcone effects of prayer via influencing wave function collapse

Alex: Yeah. One way I might put this is like, if you make a prayer for someone in a different light cone, God might "hear your prayer" and affect how things unfold in that different light cone.

Divia: If we try to specify what you mean by that… I think that's where I'm like, what does it even mean for me to know that there is somebody in another light cone? Like, from what stance is this even a thing, or something? 

Alex: Well, if you and someone started moving away from each other, both at close to the speed of light, you would end up in different light cones.

Divia: [...] But do they even exist anymore from my perspective, now that they're in a different light cone? 

Alex: I think so. I mean, for me intuitively, it feels very much like that. 

Divia: Yeah, I guess that's true. If I imagine someone getting on a spaceship, and I'm on a spaceship, then I'm like, alright, fine, I don't actually think of that person as gone. 

Alex: Yeah. I'm just like, yeah, physics, as I understand it, says we'll never interact again, which means we'll probably never interact again, directly, physically. 

Divia: Okay. So you think that you could pray for the person in the other light cone and it might make a difference?

Alex: Yes. 

Divia: So, I think one thing you have said about how you came to believe this is that people you respect seem to take it seriously. 

Alex: For this one in particular, I'm just going off of Chris. 

Divia: Just Chris. Well, it is true that a lot of religious people I think also believe this. But in this case it's about what Chris thinks.

Alex: Yeah, I mean they also don't talk about light cones. The stuff religious people say, I think, can't be distinguished from the first two kinds of things you were describing, the overt and the subtle-but-still-physicalist. I never asked about light cones. 

Divia: Okay, if you want a potential physicalist-compatible explanation for those, it could be that there was some subtle interaction we had before we separated that meant that you knew that I was gonna pray for you, but you only knew it because of the time we did interact. And then I didn't pray until later in temporal time, but in logical time I'd already prayed and you could tell. 

Alex: Yeah, I think this is actually consistent with Chris's picture of what's going on. 

Divia: Okay, this one seems compatible with physics, leaving aside something extra happening with the wave function collapse… I think?

Alex: What's coming to mind now is that in Chris's models, how the wave function collapses is intimately tied with the wills and desires of conscious observers, which cannot be understood from a purely physicalist frame. And so, I think the way that it was already logically overdetermined that you would pray, would manifest as like, it was already logically determined that some wave functions would collapse a certain way. 

Divia: I have to think about that. 

[1:10:18] The physicalism null hypothesis, pt 1

Divia: It always occurs to me, like… what is my stake in whether it's physicalist or not? And it definitely interests me. I don't know, but it causes me to wonder. 

Alex: As a general side note, how to be appropriately skeptical and rational while engaging with all this other stuff has been a persistent question for me.

And I try to keep things as physicalist as possible as my null hypothesis. And I'm also open to physicalism as we understand it being not nearly as constraining as we think it is, or just straightforwardly being false. 

I feel like I don't understand either of those scenarios well enough to feel comfortable attributing any particular thing I'm aware of to them. It's more like, as I've gotten into this world, the weird shit I see just gets weirder and weirder. 

Divia: Anything you can easily share? 

Alex: I mean, there's the not super weird stuff, like energy healing appearing to have an effect. 

Divia: Right. 

[1:11:19] Cross-hemisphere remote healing with ayahuasca?

Alex: I think one of the weirdest things I've encountered is a friend of mine at an ayahuasca retreat in Peru having an experience in the DMT realm of his partner and her energy body, and him going into her energy body and cleansing something out in her heart, and then purging at the same time, as one does in ayahuasca ceremonies when you're letting something go.

And his partner was in the US, and he was in Peru. And later that night he got a text from her, being like, "I just had the weirdest experience. I felt this fish-like thing swim into my chest and clear out some energy there." And I'm just like… that's really weird! Remote healing is just part of the tradition of the indigenous healers that I’ve worked with there, and I still don't really know how to relate with it, but…8 

Divia: But it does seem like they're doing it.

Alex: … they've shattered my ontology enough already… 

[1:12:13] Learning from "plant spirits"

Alex: …the way they sing to me during ayahuasca ceremonies… these Shipibo healers [from the Temple of the Way of Light], in ceremony, sing individualized healing songs to each person, where, as they're singing, it sort of feels like they're doing surgery on your energy body.

Like, they're vibrating their vocal cords in just the right ways to resonate with the precise deep blockages in your system, that you didn't even have a concept of before. And then when you ask them how they do it, they're just like, "Oh, we're not really doing it. We're just channeling the plant spirits, and they're just telling us what to do." 

And I'm like, what the fuck does that mean?? So then I went and dieted with a plant spirit the way they do [at Niwe Rao Xobo], to commune with the plant spirits, and I felt the presence of something in my system. 

Divia: Wait, so when you say that, what does that mean? With a plant spirit? 

Alex: Each morning I would be drinking a solution with a plant in it. And during the ceremonies, the healers would be "connecting me to the spirit of the plant", and over the course of the retreat I would have vivid dreams of a particular flavor.

I would feel the presence of… in my normal layer of consciousness, like my normal "stack trace", it feels like there's another layer that's inserted in between somewhere… all my thoughts feel like they get filtered a certain way, and biased in a particular direction, as opposed to how they normally would.

And there was one day when there was a writing prompt to journal from your plant spirit, and I felt like I was doing automatic writing. The stuff that was coming out was not something that ego/body/mind Alex would have been able to generate. 

There's nothing supernatural about this, but… 

Divia: It's weird though. Yeah. 

Alex: Yeah, it is weird! It's like, wow, maybe learning from the spirit of a plant is not a type error. I experienced it, so it's not a type error. 

Divia: Well, so, are you able to say what you learned from the plant? 

Alex: There’s the feeling of unconditional love that I had during MDMA, or something. I felt like it was sort of just propagating really deep in my system all throughout.

There's a sense that it sutured a bunch of emotional and relational wounds that I had with family and friends. Like I remember one day just waking up from a dream and just picturing my first girlfriend smiling widely, and me just feeling like, I'm fully healed from that breakup.

Divia: Huh. Okay, so, but, if I go with it, that it was the plants… I don't know, what is plant consciousness like? What sort of things do they know, and how do they know them? 

Alex: There's no voice. I described it as, like, a soft, subtle presence that was there in my mind that was filtering my thoughts to bias in a particular direction.

That was my direct experience of it. My sense is that there are certain aspects of their biochemistry, and how they convert energy into more of themselves, that can be transferred to… like, there are analogs of that in us, and that's the thing that we're directly learning. But, I'm just wildly speculating.

Divia: Yeah, interesting. I mean, it does seem like human consciousness is pretty complicated in a way that makes it more fragile. I mean, plants are complicated in certain ways, but it does seem like a more basic thing is going on, that I could imagine getting in touch with would be positive.

Alex: Yeah, I think there's basic stuff that plants do that lots of people aren't doing. 

Divia: Yeah, like even just, okay, this has nutrients I need, therefore I will move in that direction. This is causing me to grow, so I will go more towards that. Whereas, because humans are complicated, sometimes we end up doing more like the opposite of that. Does that seem sort of roughly right? 

Alex: Yeah. 

Divia: Okay. And your model is that the actual thing you were drinking was important. It wasn't like you imagining it being a plant was the most important part of it. 

Alex: Yes. An important thing worth noting is that people often describe it as, like, you're growing the energy of the plant inside of you. And as it's happening, a bunch of stuff is getting rejiggered in your system. Like, after this, we were told to adhere to pretty strict restrictions for about a month afterwards. 

Divia: Like, what's it like, behavioral, dietary? 

Alex: Diets, stuff like no sex or masturbation. And people who violate these restrictions often report feeling physiological and psychological consequences, in a way that has left me a little bit afraid of how deep along this path I personally want to go. 

Like when I hear about these consequences, like someone feeling like, yeah, I had to go to the hospital to get my kidneys checked out or something… I'm like, okay, something real is definitely happening! That much is clear to me. 

Divia: Oh. Yeah. Okay. 

Alex: Yeah. This was a tangent for, like, yeah, I've seen some weird stuff, including from the Shipibo healers, and remote healing is part of their tradition, and I'm still like… I don't know what to make of this, I still feel very skeptical, but also, after having my ontology shattered for what is remotely possible in the world enough times by this tradition… I'm just like… look, yeah… 

Divia: You take it seriously. 

Alex: Yeah. 

[1:17:22] The physicalism null hypothesis, pt 2

Alex: And this was part of a rabbit hole from, like, yeah, I try to stick with the physicalism null hypothesis as much as possible, and sometimes I just see really weird stuff, and I'm just like, I have no idea how to explain this under the physicalism null hypothesis. I would prefer to have the possibility of some alternative explanations than to just gaslight myself, and be like, well, that didn't actually happen. 

Divia: Right. It causes me to think about what's load-bearing about the physicalist stuff. Cause I think there's some of it that's like, okay, are other people going to think I'm crazy if I take this seriously? And that… I mean, it does matter to me, but that isn't really how I want to figure out what's true. That seems like that's not really about truth-seeking.

But then there's something else that's… maybe the way I would put it is something like, I have not personally seen any accounts that seem super credible that anything has happened that isn't compatible with physics. And if things like that were happening, then why wouldn't I have seen any? 

And people have answers to that, but for the most part, that seems pretty persuasive to me. But then I guess there's yet another thing, where I'm like, I get some sense of security about having some checksums, especially because people really do lie about stuff a lot. But as we've said, physics doesn't actually constrain a lot of things that people sort of often relate to it as though it does constrain. 

And even so, people could lie about plenty of things. It's interesting for me to notice that, because in fact people lying to me is a big problem regardless, and so I sort of have to have a bunch of strategies for that anyway. 

Alex: I mean, I feel like I basically held the same null hypothesis of, like, maybe everything can just be explained by physicalism. And then I had these very strange experiences, heard these very strange anecdotes, and I'm just like… maybe that can be explained by physicalism! But also… when they get weird enough, I start looking for possible alternative explanations. 

Divia: Yeah, that makes sense to me. The most compelling part about it is that we don't want to be preemptively gaslighting ourselves. That seems obviously wrong. 

[1:19:45] "The afterlife" as already happening, but occluded by psychological distortions

Divia: Okay, let me back up a little. We got into this talking about karma and the wave function collapse. This was sort of an aside about some things. You were like, mostly I'm trusting Chris, and also, you have some experiences that make you wonder about this stuff. So, can I go back to the idea of the spirit realm, or the inter-life realm? 

So there are some things that, if I take them sort of at face value, like, okay, that you could then go talk to this person… again, it seems like it strains credibility if I imagine it in a personal identity sort of way, but then if I'm like, okay, but what could they mean by that, that there is some sort of communication between… this type of processing that's happening, and this type of processing that's happening… that doesn't seem obviously wrong. 

And then I'm like, okay, but would there be some phenomenology of it? Which, as I said, I have sort of vague panpsychist intuitions the way I think a lot of people end up with them. It's like, I don't really know what consciousness is, so maybe everything has some of it.

And then I can try to refine it a little, by being like, if things are modeling themselves, then that seems like maybe an important part of what it means to be aware, and this kind of loopy thing… something like that. And then I'm like, okay, but would these experiences have something like that? I don't know, do they? Do you have thoughts on any of these random ideas? 

Alex: Yeah, so the first thing is, I think Chris basically thinks there's reality as a whole, and then there's physical reality, which he calls the terminal realm and sort of thinks of as the surface layer of reality as a whole. He calls everything beneath that surface the nonterminal realm, the place where all the real stuff is actually happening.

Going with the simulator analogy, I think he describes the terminal realm as the display of the simulation, and the nonterminal realm as where the processing is actually happening. 

And I think Chris doesn't think about the afterlife as a place you go after you die. It's more like, when you die, you stop inhabiting the surface layer, and sort of rest back in the deeper layers where the stuff is actually happening, which is… 

Divia: But when you say when, I'm like… what do you mean by when?

Alex: At death, all the parts of your identity get dropped, except the parts that were already there, causing stuff to happen from that place the whole time… I think that's closer to an accurate way of thinking about it. I think even Catholics say that heaven and hell are like… not places you go after you die, but the state of your soul in relation to God, which is present even while you're alive. I think that's a lot more like how Chris is thinking about it. 

And it's more like, any psychological distortions that are causing you to not be in touch with that fall away. And that's what you get in touch with in the process of death.

Divia: In the near-death experience, also. Because, empirically, in at least many cases, when people think they're gonna die, they do let go of a bunch of psychological distortions. 

Alex: In some sense, I think there's a root psychological distortion, which is like, I can't die. 

Divia: Fear of death. 

Alex: Yeah. And then, when you're directly confronted with death, it's like, oh, I guess there's no more point to all these other distortions… won't hurt to look at this point! 

Divia: And then you're like, okay, now I can see what I really am. 

Alex: Yes. 

Divia: So your model is that this is sort of always what's going on, that it's more real, and that in some sense it would be obvious to everyone, except for psychological distortions, which are downstream of fear of death, or like, inability to comprehend death, or something like that. And that's why people can sometimes speak about it, because it's not impossible to let go of those psychological distortions while still alive. 

Alex: Or to pierce past the veil, so that they… 

Divia: Temporarily see something. Yeah. Okay. 

[1:23:37] The CTMU as an articulation of the metaphysical a priori

Alex: On that note, one way I think about what the CTMU is trying to be at a type level, like… the anthropic principle tells us that, on the one hand, the fact that the physical constants seem fine-tuned seems kind of surprising, but on the other hand, it's an a priori necessity for us to even be wondering about this question, and from that perspective, it's not surprising.

I sort of think of the CTMU as answering the question of, what must be metaphysically true a priori in order to support the existence of observers like us in a world that is like the one that we are in? 

Divia: I think I don't quite follow. 

Alex: I think I'm just imagining someone asking, "Why should we think this is how reality works? It sounds like you're painting a pretty specific picture of how reality works. What's your evidence of this? Why are these not just a bunch of random details that are being strung together?"

And my understanding of Chris's understanding is that this picture of reality is actually the metaphysical a priori. It has Kolmogorov complexity zero. Given that we exist as observers of the world like this, if we strip away all of our psychological distortions and metaphysical confusions, we see that it actually has to work kind of like this by a priori logical necessity. 

[1:24:54] CTMU vs Tegmark IV vs ultrafinitism

Divia: Okay, maybe this is a dumb question, but can you compare and contrast with, like, a Tegmark IV understanding?

Alex: Tegmark IV treats all mathematical objects as kind of platonically existing, including all the natural numbers, and an ultrafinitist would object to that, I think for good reason. And when you add in how much these structures exist… 

Divia: Yeah, that's where I was going to go with this too. This seems like a big question about everything existing… it seems like surely some things must exist more than other things. 

Alex: Yeah. I think the way Chris thinks about it, all things exist as potentiality in the pure potentiality, no constraints, Godhead thing. But actual objective existence, and the consciousness that perceives the existence of these objects, must in general co-arise. This is the dual-aspect monism part. 

Divia: Wait, sorry, can you say that one more time? 

Alex: Any object that can exist objectively co-arises with the thing that perceives it. And so, Tegmark IV in some sense does exist, but as pure potentiality. And the aspects of it that get actualized somehow depends on who the observer is, and what they're paying attention to, and why they pay attention to that, and so on and so forth. And the dynamics of that are a lot of what the CTMU is about. 

Divia: Yeah, definitely. I don't know. When you say it that way, it seems timeless in a different way from how it already seemed or something. Interesting. 

[1:26:19] The Distributed Second Coming as a self-fulfilling prophecy

Divia: Okay, you have some more time, but I just want to make sure, are there any other things that I should have asked you about that I have not yet asked you about?

Alex: The Second Coming of Christ? 

Divia: Yes, that was on my mental list. So, before, when you were talking about how you think that in the main religions, there are sort of convergent mystical traditions that seem to see the same truths, but the religions themselves are sort of about the time and place and the people that they're trying to speak to…

Alex: Mm-hmm. 

Divia: With the Second Coming of Christ, is this sort of like trying to bring the Christ-consciousness to the current context more? Is that what that's about? 

Alex: That's more or less how I think about it. The idea I'm trying to point at is not specific to Christianity, although I think it is consistent with the Catholic account of the Second Coming of Christ.

Divia: What is the Catholic account? I'm not familiar. 

Alex: It's not that well-specified. I just remember looking at the Wikipedia page and being like, oh, this all sounds surprisingly reasonable. 

There's this guy, Pierre Teilhard de Chardin, who has this theory of the Omega Point as the culmination of spiritual evolution. Er, of evolution.

Divia: Is he a Catholic? 

Alex: Yeah. And his conception of the Second Coming of Christ was as the culmination of evolution, where evolution, through us, is now in its phase of spiritual evolution. And when we collectively spiritually evolve, basically to the point where Christ-consciousness descends upon us all, that's what he calls the Omega Point. That's how he thinks of the Second Coming of Christ. And he was considered heterodox by the Catholic Church when he was presenting his ideas, but now he's kind of just accepted and respected.

Divia: Interesting, okay. And how do you relate to this? 

Alex: There is a quote by Thich Nhat Hanh, who was a Buddhist teacher, who passed away recently, and whom Martin Luther King Jr. nominated for the Nobel Peace Prize.

Divia: Oh, I did not know that. 

Alex: Yes! And he said something about the next Buddha, which I think resonates a lot.

Divia: Same thing. 

Alex: Yeah. He says, it is possible the next Buddha will not take the form of an individual. The next Buddha may take the form of a community, a community practicing understanding and loving kindness, a community practicing mindful living. And the practice can be carried out as a group, as a city, as a nation. 

We know that in the spirit of the Lotus Sutra, we are all [students] of the Buddha, no matter what tradition we find ourselves in. We should extend that spirit to other traditions that are not called Buddhist. We can find the jewels in other traditions — the equivalent of the Buddha, the dharma, and the sangha. Once you're capable of seeing the jewels in other spiritual traditions, you'll be working together for the goals of peace and brotherhood. 

And… "Second Coming of Christ" in popular imagination tends to connote, like, Jesus Christ reincarnates, and… 

Divia: Yeah, there's gonna be like an individual guy. 

Alex: Yeah, and he makes everything good… somehow. Which, to me, seems about as plausible as Bearded Sky Father Interventionist. 

Divia: Right. So, do you think something like this is going to happen? 

Alex: Before I get there… I think of it more as a distributed Second Coming of Christ-consciousness. I think of Christ-consciousness and Buddha-consciousness as not literally the same, but morally equivalent for the context of what I'm trying to talk about right now.

Divia: And it's basically being able to see through… 

Alex: The veil of psychological distortions, yeah. And I think of it less as something that's definitely going to happen, and more as a possible self-fulfilling prophecy, where if we get our shit together enough, then this is going to happen, but also, whether we get our shit together enough might depend on whether we believe this happens. 

Just like if you want your company to succeed, you have to believe that it's going to succeed. 

Divia: I think there's some nuance there. 

Alex: Yeah, yeah.

Divia: But sure, yeah. Which is presumably part of why you want to talk about it. 

Alex: Yes. 

Divia: Yeah. I think I maybe heard you say that part of your life's work is to try to make this happen more. Does that seem right? 

[1:30:26] Synthesizing the world religions with each other, and with science 

Alex: Yes. I think that, in fact, the world religions can be united in a meaningful way. The thing that captures the synthesis [of the religions]… I think the median current Christian or the median current Muslim will look at it and be like, that's not my religion. And the median current atheist will look at it and be like, that seems wrong. 

Like, the synthesis of all the religions and atheism, I think is a thing, and it’s real, and I think the CTMU captures a lot of it. 

Divia: Okay. And, you're saying that part of what you’re actively working on is trying to translate the CTMU into something more accessible? 

Alex: Yes. Something that scientists and every world religion can understand. Or at least the intellectually sophisticated representatives. 

Divia: Are you with Chris on this? 

Alex: Pretty much, yes. Chris has an actually understandable… relatively understandable paper called "Metareligion as the Human Singularity", which is basically about exactly this. 

Divia: Okay, we can try to link that in the show notes, also.

Alex: Okay, yeah. This is basically how I currently think about AI coordination. 

Divia: Yeah, I was gonna say, this is what you think we need to do about AI. 

[1:31:35] AI coordination – the Second Coming as the prevailing of the "Schelling coalition"

Alex: Yeah. I think psychological distortions are going to prevent meaningful peace from happening in the world. People are gonna double down on their wounds, and be like, "We should be the ones who have the most power!", and that's just gonna escalate. Rather than being like, "Oh, maybe more power isn't actually the thing that we want in the first place." 

There's a quote that's popularly attributed to Jimi Hendrix: "When the power of love overcomes the love of power, the world will know peace," and I think that resonates completely with how I'm imagining stuff. 

Divia: Yeah! Do you have more of a vision of how this is gonna play out? It seems like part of this routes through, like, maybe a shared conceptual understanding of the way things really are. 

Alex: Yes. I'm basically imagining the Rosetta Stone of the religions and science being the centerpiece of a "Schelling coalition" of… the power of love, basically. And like, when I think of the Second Coming of Christ succeeding, I'm basically thinking of the Schelling coalition overcoming all the forces that are opposing it. And the overcoming might be, like, inviting and including them in. 

And when I think about AI alignment from this perspective, the main thing I'm thinking about is, how can we build technology that differentially empowers the Schelling coalition? For me, a central question now is, how can you build a social network that promotes what's true and good, as opposed to what grabs attention?

Divia: So yes, a social network that would differentially promote things that are true and good, basically? 

Alex: Yes. And I think in order to do that, we would need enough of a technical understanding of what these concepts are such that we can actually build it. 

Divia: Are you at all bullish on using technology to help people meditate better?

Alex: Seems helpful. It seems like it might be a nontrivial piece of the puzzle. 

Divia: Not particularly where your focus is. 

Alex: Yeah, that's right. 

[1:33:31] AI peacemakers, for empowering human peacemakers 

Divia: Got it. Are there other things that you're particularly focused on as far as differential tech to empower the Schelling coalition? 

Alex: AI chatbots that are actually good at conflict resolution, and genuinely helping people. 

Divia: I assume you come down on the side of: yes, the AI should in fact help… should not just do what people say if it's obviously not what's actually good for them. 

Alex: Right, like AIs that help people overcome their psychological distortions, while also not being like L. Ron Hubbard. 

Divia: Right. Yes. And that would be an interesting case study in how exactly that happened. I don't know a ton about the history of Scientology, but okay. 

Alex: I mean… AI girlfriends who can cybersex with you feel more like they're in the L. Ron Hubbard territory. 

Divia: Yeah, you're not bullish on those. 

Alex: Definitely not in their current formulations. It's plausible to me that there's some version of those that could help reach a broad audience in the right way or something, but that path seems very fraught. 

Divia: Well, certainly mainstream religions tend to be down on that sort of way of reaching people, right? And it seems to be more popular among the things people want to call cults. So if we go with that heuristic, it doesn't seem super promising. 

Alex: Yes. 

Divia: Is a decent framing of it that you want… AI priests to help people? 

Alex: AI peacemakers is actually how I'm currently framing things. To empower the human peacemakers. 

Alex: Right now I'm picturing it less as, like… I think if deep neural nets are going to become coherent agentic long-term planners, I'm like, okay, we're probably just fucked. 

Divia: Yeah. 

Alex: And also, I don't think that's very likely. 

Divia: You think they're not going to?

Alex: Yeah, I think they're not going to. [And, therefore, I think most of the peacemaking is going to originate from humans, not AIs – hence, the emphasis on empowering human peacemakers.]

[1:35:24] Cellular intelligence, "embodied cognition", and AI timelines

Divia: That's one I actually haven't asked you about. Do you have any thoughts on AI timelines and different scenarios, or anything like that you want to share? 

Alex: Yeah. Back when I was double-cruxing people about timelines, there was one guy in particular who had long timelines, my friend Gary Basin. And I did not understand his views back then, but now I basically just agree with everything he said back then. 

Divia: Are they written up somewhere? 

Alex: I don't think so. But after double-cruxing with a friend at length, the thing I've come upon is, I now think that our higher-level cognition is meaningfully built on top of stuff happening at the cellular level.

Divia: Cellular. Okay. Can you say more about that? I was expecting you to say something like "embodied", and I'd be like, yeah. And now… what do you mean by cellular? 

Alex: I mean, cells are really good at being robust and adaptable. Like, you can put them in environments pretty different from what they're supposed to be in, and they somehow adapt in a way that seems super foreign to someone working in ML. 

Divia: Are you talking about stem cells or something? Can you give a concrete example? 

Alex: Michael Levin had an example on a Lex Fridman podcast. [Video link, 1:25:15-1:27:08] I can try fishing out the quote, but I don't remember the details.9 

Divia: But it wasn't a stem cell. It was some other type of cell. 

Alex: Yeah, it was a non-stem cell being placed in a situation that's different from what it was supposed to be, and it somehow just adapted in the right way. 

Divia: This sort of reminds me of what you're saying about the plant.

Alex: Yes. 

Divia: Because I’m like, okay, if I go with the plant having some sort of consciousness, then I can see why it would have something to teach me. But then again, there's this question of like… I'm drinking the plant… how am I getting the consciousness from drinking the plant? 

Alex: For that point, I think if it's just you drinking the plant, you might not get that much. In the ayahuasca ceremonies in the evenings, the healers are allegedly directly connecting you with the energies of the plants, and that's how you form the connection with them. That's what they say, I don't know what that actually means.

Divia: Anyway. I don't know. There may not be any there there, but what you're saying about the cells being very adaptable, and how you think that might be load-bearing for higher cognition… it seems intuitively related to why drinking the plant in the right context might actually… 

Alex: Right, totally. And my experience with drinking the plants is part of what's updated my intuitions in this way. 

Divia: You know what's funny? I ran into an old classmate of mine many years ago, and she was talking about eating the consciousness of plants. Makes me rethink that whole conversation a little bit. She had been doing some sort of raw diet that she felt like she'd learned a lot from. Hard to know in any particular case, but… 

But okay. So, Gary Basin thinks that one reason it will be hard for the neural nets to replicate the sort of agentic behavior that humans have is because they're not cells? 

Alex: This is my current gloss. I don't know what Gary Basin thinks. 

Divia: Never mind what Gary Basin said. 

Alex: Gary did tell me that paramecia are capable of doing a fair bit of learning and stuff. And that seemed like a point of evidence. 

Divia: Slime molds, too. 

Alex: Yeah. And, like, at first I was like, okay, what is this relevant for? But I feel like I now better parse the kind of point he was trying to make with that. He did talk a lot about embodiment, and… 

Divia: But maybe embodiment has more to do with cells than I might have thought? 

Alex: Yeah, I think embodiment is kind of a red herring for what it connotes, because people are often like, well, if you have a robot, does that make it now embodied? And I'm like, no, that actually misses the point completely. And then they're like, but how does it miss the point? 

Divia: …completely?

Alex: Okay, maybe like 80%. 

Divia: Yeah, I mean, certainly I don't look at my Roomba, and I'm like, yeah, that– 

Alex: It's like, ah, because you have a physical body, it’s– 

Divia: But if I imagine a really sophisticated physical body, I don't know. So can you help unpack why you think even a pretty sophisticated physical body that's still, like… 

Alex: Well, I think the cruxy bit of the sophisticated physical body is that it was built iteratively out of simpler parts…

Divia: Oh… 

Alex: …which in turn were built iteratively out of simpler parts, like Matryoshka dolls, where the smallest ones are cells. 

Divia: There's some law about that, right? [Gall's law] That, like, the only way to have a complex system that actually works is to have it evolved out of simpler systems that worked? 

Alex: That is the kind of thing I'm trying to gesture at. 

Divia: Interesting. Okay. 

Alex: For what it's worth, Chris Langan was the person who first communicated the general idea of this to me, that I then hashed out with another friend [Ashwin Sah] to come up with my current articulation. So, Chris gets intellectual credit for how I'm thinking about this. 

Divia: Okay. We haven't even talked about Ken Wilber on this podcast, but it definitely starts to remind me more of his worldview – that the problem with the robot is that it's not made of things that are made of things in the same way, sort of alive all the way down, or something like that. 

Alex: Yeah. And I do think that in principle, you could get an AI to replicate what's going on at one of these low levels. 

Divia: Like you're a functionalist? Is that sort of what you're saying? 

Alex: At least for intelligence, if not consciousness. 

Divia: If nothing else, you could just run a simulation of cells, right?

Alex: Yes. Although I don't think that would be efficient. 

Divia: No, it doesn't sound efficient at all. Just as a proof of concept. 

Alex: Yes. 

Divia: And then you think in practice, you could do that, but somewhat more efficiently? 

Alex: Also in principle. I think I'm more laying out why I don't think you need a literal physical body in order to be intelligent in the way we are.

Divia: Like you can be made out of silicon. 

Alex: Yeah, or the silicon can be simulating what's happening at a somewhat low level in us, and try to rebuild all the higher stuff on top of that. And that could work, but that sounds really hard also. 

Divia: And it's not mostly what people are doing. 

Alex: Yeah, it's completely not what most people are doing.

Divia: Yeah. Okay, so you have pretty long timelines then, overall, is that right? 

Alex: There's a funny thing where, like, I think the doomers’ epistemic state these days is a lot like what mine was like six years ago, and now mine is more like what Gary Basin's was six years ago. Yeah, I think timelines for crazy fucking AIs are pretty short, but timelines for… 

Divia: Like narrow AIs? Or… 

Alex: …pretty general and crazy and "transformative" AIs might be pretty short. 

Divia: Like how short? 

[1:42:05] Transformative AIs may not outcompete humans at everything

Alex: Well, I don't actually know what "transformative AI" really means. I mean, Holden talks about when you can automate science and technology research as a particularly interesting bar. 

Divia: Yeah, do you think, when do you think…?

Alex: On my current inside view, it's not ever going to get fully automated. I think human-AI teams are going to vastly outcompete individual humans, but it's hard for me to picture a world where AIs are full-stop just outcompeting AI-human teams, in general. 

Like, for any specific narrow domain that you pick out, I think it can happen, but in general, it seems kind of implausible to me. 

Divia: In general includes for science and technology research. 

Alex: That's right. But on the other hand, maybe narrow AIs can still accelerate science and technology research by many orders of magnitude. And that would still be with… 

Divia: Humans in the mix, but that is transformative. 

Alex: Yeah. 

Divia: Yeah. OK. Let's see what else we should cover in wrapping up. Anything else we missed? 

[1:43:04] Is the "AI" part of "AI alignment" a red herring?

Alex: Yeah, I've been updating recently toward… the "AI" part of AI alignment is actually kind of a red herring, and there's a general… 

Divia: There’s room for alignment in general. 

Alex: Yeah. Of complex systems in particular – which we are, and which AIs are going to be as well. That feels like the more relevant level of abstraction at which to be thinking. 

And, yeah, AI interpretability – good! Obviously good. In the sense that big black-box AIs having huge effects that we don't understand is obviously bad. There are obviously massive downsides to that. 

But also, I'm kind of like, if you look at the economy, it's pretty legible. It's made of a bunch of parts, like businesses, and you can track all the transactions… the economy is pretty transparent to us, and we still don't really know how to structure it in such a way that it isn't Moloch-y. And I feel like even if we could see all the internals of an AI, we would end up bumping into a similar kind of issue. 

And also I think that the Moloch-iness has to do with the fact that people in their psychologies are Moloch-y. 

Divia: Okay, can I bookmark this? I feel like this is gonna be a too-long conversation because we only have a few more minutes, but I have some beef with the term Moloch. I think there's obviously something that it's talking about that's real and important, and I hear it, and I'm like, I can't handle that concept. So anyway, bookmark. 

Though in general, I'm pretty on board with what you're saying about alignment of complex systems in general potentially being more the thing to think about. Especially if I sort of take it as a given that humans will still be in the mix with the AI systems, and that the combination will outperform the AI systems in general for a long time. 

Alex: Yeah. Another analogy I use: sometimes you want to prove that a particular proposition is true for [a particular] number, like… 556978. It's easier to just prove it for all numbers than to prove it for that particular one. And if you just try to focus on that particular number, it's a red herring. And it's feeling to me more and more like that's the deal with AI alignment, in relation to complex systems alignment in general. 
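[A concrete illustration of this point, using a lemma of my own choosing rather than one from the conversation: consider the claim that the sum of the first n odd numbers equals n². Checking the single case n = 556978 by direct evaluation means adding up 556978 terms, while the universal claim falls to a short induction.]

```latex
% Claim (for all natural numbers n): \sum_{i=0}^{n-1} (2i+1) = n^2.
% Base case (n = 0): the empty sum is 0, and 0^2 = 0.
% Inductive step: assume the claim for n; then
\[
  \sum_{i=0}^{n} (2i+1)
    = \underbrace{\sum_{i=0}^{n-1} (2i+1)}_{=\,n^2 \text{ by hypothesis}} + (2n+1)
    = n^2 + 2n + 1
    = (n+1)^2 .
\]
% The case n = 556978 now follows as one instance of the general theorem,
% with no 556978-term computation anywhere in sight.
```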

Divia: Got it. 

[1:45:16] Closing

Divia: Well, thank you so much for coming on the podcast. You've definitely given me a lot to think about. Even though I've already talked to you about all these things before, I have more to chew on. And I think I'm gonna maybe remind you that you said you might be willing to do this again sometime. 

Alex: Mhmm! 

Divia: But, yeah! Where can people find you if they want to follow up on any of this? 

Alex: I am on Twitter, and I check my DMs there sometimes. 

Divia: Okay. Well, so that's it. Do you want to say what your handle is?

Alex: @zhukeepa. 

Divia: Cool. We’ll link that in the show notes also. All right. Thanks again! 


1. It should go without saying that memetic selection pressures led the original meme to degrade into something optimized for replicability rather than truthfulness – hence, the epistemic distastefulness of much of modern Christianity.

2. My friend wishes to remain anonymous, but I offered to make a shout-out to academic history and sociology of science and philosophy on his behalf: he recommends Steven Shapin's "A Social History of Truth", Frances Yates's "The Rosicrucian Enlightenment", and Randall Collins's "The Sociology of Philosophies".

3. On p. 294 of Quantum Metamechanics, Langan critiques the pilot wave interpretation for falsely objectivizing particles as existing in some reality independent of any observer.

4. Chris is alleged (e.g. by Wikipedia) to hold many beliefs that lead people to dismiss him outright, but that are significant distortions of his actual beliefs. For example, he is alleged to oppose interracial marriage, but he has family members in interracial marriages whom he supports. He is also alleged to think 9/11 was an inside job to distract the public from the CTMU, but in a private conversation he categorically denied believing this.

5. Scott Alexander makes a similar decision-theoretic argument for acting from behind the veil of ignorance in "The Hour I First Believed".

6. From p. 191 of Physics and the Ultimate Significance of Time: "That is to say, we no longer suppose that space-time is primarily an arena and that the laws describe necessary relationships in the development of events as they succeed each other in this arena. Rather, each law is a structure that interpenetrates and pervades the totality of the implicate order. To formulate such a law is more like painting a 'whole picture' than it is like trying to find a set of dynamical equations for determining how one event follows another. Such dynamical equations will appear only as approximations and limiting cases valid in explicate contexts." (I think I may have hallucinated the bit about brushstrokes.)

7. I think the epistemic status of these hypnotherapy reports is basically analogous to an LLM hallucination… which I think is still more informative than pure noise.

8. I am still not 100% sold on this datapoint; I would need to encounter more datapoints of this sort, ideally through direct personal experience, and rule out alternative explanations more meticulously to feel more convinced. I also think many of you probably shouldn't update very much on hearing this datapoint from me, because you don't have the information that I have (e.g. around the implausibility of my friend just making stuff up) that leads me to take this datapoint as seriously as I do.

9. Lightly edited excerpt from video, about extreme robustness on the cellular level: "Imagine a newt [...] it's got these little tubules that go to the kidneys [...] take a cross-section of that tube, you see 8-10 cells that have cooperated to make this little tube cross-section [...] one amazing thing you can do is you can mess with the very early cell division to make the cells gigantic [...] if you make the cells different sizes, the whole newt is still the same size [...] if you take a cross-section through that tubule, instead of 8-10 cells, you might have 4 or 5 [...] until you make the cells so enormous that one single cell wraps around itself, and gives you the same large-scale structure by a completely different molecular mechanism. So now, instead of cell-to-cell communication to make a tubule [...] it's one cell using the cytoskeleton to bend itself around. [...] In the service of a large-scale anatomical feature, different molecular mechanisms get called up."
