Universe simulations might naturally occur all over the place. By simulation I mean: if there is some physical process P in universe U with time-evolution operator T, and there is a map M from P to another universe u with time-evolution operator t such that M P = u and M T P = t M P (ie M P' = u'), then universe U is simulating universe u (tentative definition, but it seems to work) -- see the attached image
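To make the consistency condition concrete, here's a minimal toy check -- everything in it (the counter host process, the parity map, the bit-flip evolution) is made up purely for illustration:

```python
# Toy check of the condition M T P = t M P: applying the host's time evolution
# and then mapping gives the same state as mapping and then applying the
# simulated universe's own time evolution.

def T(P):           # host universe's time evolution: a counter that ticks up
    return P + 1

def M(P):           # map from the host process to the simulated universe's state
    return P % 2

def t(u):           # simulated universe's own time evolution: flip the bit
    return 1 - u

P = 0
for _ in range(10):
    assert M(T(P)) == t(M(P))   # the intertwining / consistency condition
    P = T(P)
print("M T P = t M P held at every step")
```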
If time-evolution operator consistency like this is all that's necessary for a universe to be simulating another universe, then there might be arbitrarily complex universes embedded in some real-world physical processes. There could be entities living in things around us as encrypted information-containing processes
But, the maps between physical processes in our universe and these simulated universes might be incredibly complex and high-entropy (when taken together). The physical processes themselves might be incredibly complex and individually high-entropy, and they may be open systems (as long as there is time-operator consistency), so they might be spread through and around other processes
I think too there is maybe some more-abstract model here involving self-simulation and holographic self-encoding: the universe could be, in some sense, simulating itself. If there are larger patterns of time-evolution operator consistency in arbitrary maps between information-spaces in sections of the universe, then in essence the laws of physics themselves might be some image of a much more grand physics, and the universe we see is an image of a much more grand universe. This coming, of course, from me as I continue to look at quantum mechanics from an information-theoretical perspective, where this kind of stuff is natural because a classical universe a la MWI is one term in a decomposition of a universal superposition
Note: there may be other preferred operator consistency properties like energy conservation, momentum conservation, etc. But this general time-operator consistency puts no constraints on the physics of the simulated universe (other than that it changes in time)
Another note: ultimately, a person concluding there is a simulated universe in their coffee mug or whatever implies that there is some closed cycle of maps which allows them to consistently extract coherent information from that universe, and if it quacks like a duck and so on, it is a simulated universe
I also discovered that you can estimate the sinuosity using the width of the path (the maximum straight-line-orthogonal distance between points on the path). Roughly, you take the width of the path, multiply it by 1.6 and add 0.9. The attached image is the fit for that line
I wanted to see what various sinuosity values actually looked like, so I wrote a small mma script using filtered random walks to visualize it. In the attached image a path is shown in each grid cell, and in the top-left corner of each is the percentage you have to add to the straight-line distance to get the path distance. eg: +20% is a 1.2x multiplier, ie: add 20% of the straight-line length to get the path length
Sinuosity is the length of a path divided by the shortest distance between its endpoints. It's the factor you have to multiply the straight-line distance between two points by to get the real distance. Very useful for estimating distances
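A minimal sketch of both computations. Note: the width-based estimate assumes the width is measured relative to the endpoint distance -- that normalization (and my reading of "straight-line-orthogonal distance") is an assumption, not something from the fit:

```python
import math

def sinuosity(path):
    """Path length divided by the straight-line distance between the endpoints.
    `path` is a list of (x, y) points."""
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    return length / math.dist(path[0], path[-1])

def path_width(path):
    """Maximum separation between path points measured perpendicular to the
    straight line through the endpoints (my reading of the width definition)."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    straight = math.hypot(dx, dy)
    perp = [(dy * (x - x0) - dx * (y - y0)) / straight for x, y in path]
    return max(perp) - min(perp)

path = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
s = sinuosity(path)
# Width heuristic from the fit above, assuming width is taken relative to the
# endpoint distance.
w = path_width(path) / math.dist(path[0], path[-1])
print(f"sinuosity = {s:.3f}, width-based estimate = {1.6 * w + 0.9:.3f}")
```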
Epistemic status: provides absolutely zero definitive knowledge
PC: physical / neurological consciousness. Analogous to the state of a computer running an AI
MPC: metaphysical consciousness. This is consciousness in the *watching from elsewhere* sense. A transcendent consciousness. I think a consciousness in idealism is probably always an MPC
Assume that MPCs can only attach to one PC at a time; that MPCs are always attached to some PC; and that PCs can be divided to get two PCs (the justification for this last one is that PCs are abstractly equivalent to the state of some physical process as it changes in time, so we can divide that process into two physical processes with two states). Note: all of these assumptions are extremely tenuous
If we divide a PC with an attached MPC into two PCs A and B, our MPC must still be attached to either A or B (not both). Since A and B are both PCs, we can do this procedure again as many times as we want. We can then narrow down which physical point the MPC is attached to on the original, unsplit PC, either in this particular case (potentially just as a result of where we divided the initial PC, and the PC's state as we divided it) or universally (where the MPC attaches every time)
If the MPC attaches at a singular point every time, there is a heavy implication of some unknown physics governing the MPC-PC connection, and that potentially the MPC is essentially an unknown physical process (or attaches to some other, unknown physical process), or a physical-like process that can be modeled in the same way we model regular physics
If the MPC attaches randomly, then that indicates some small scale process in the vein of thermodynamics that determines the MPC-PC connection
Both of these cases could also be contrived by some intelligent force behind the scenes, which may itself imply that MPCs are epiphenomenal in some exotic model of reality. This may also imply pantheism (or really any other form of deism / theism), and potentially supernaturalism in general
If you split a PC and you get two PCs with an MPC attached to each, then that seems to imply panpsychism (in the sense that consciousnesses are attached to everything). If you split a PC and you get two PCs with no MPCs attached, then what does that imply?
Note: MPCs are extremely paradoxical, partly because most humans will say they are principally an MPC attached to a PC, but all PCs without attached MPCs can say the same thing. So it's safe to say there is no known test to determine whether a PC has an attached MPC. And, more extremely, it's unknown whether MPCs exist at all (PCs do definitively exist)
In my opinion, if people truly believe MPCs exist (I do), they should be trying to develop tests that identify MPCs. Though MPCs may be beyond the scientific process to explain. If there *is no test* to identify MPCs, then it's probably impossible to reason about them beyond speculation. It may be possible somehow (idk how) to make predictions using them, though
I've been thinking about what I call subject-level versions of -logies / subjects, and subject-level thinking. Where, in subject-level X, you equally recognize everything in X; and in subject-level thinking, you're thinking of the entire subject as a whole, and (preferably) not ignoring certain aspects of it
An example: *subject-level quantum physics* is the equal recognition of everything in quantum physics. Someone who believes the many-worlds interpretation of quantum physics will typically not spend much time examining pilot wave theories. In subject-level quantum physics you recognize all of the interpretations equally: all of their flaws, all of their advantages, etc
That might not seem very useful, but here's another, more practical example of subject-level thinking: paranormal studies. Suppose there is a supposedly haunted house where many people say they've seen a spooky apparition wandering around when they've been there. We know their story about seeing a ghost isn't *ultimately* convincing on its own, because some people won't believe they saw a ghost, but rather that they mistook something else for a ghost, or they hallucinated, etc. And generally there are two camps: people who think it's a ghost (whatever a ghost is), and people who think it's something mundane. But how about this: the supposed "ghost" people have seen there is actually an alien! That explanation doesn't involve ghosts / spirits / the afterlife, and isn't mundane. Neither side would be convinced by it. However, that explanation is as good as the others (up to probabilistic considerations) until more evidence is collected to (essentially) rule it out. This explanation would more likely occur to someone who's examining the situation from a subject-level perspective. But the subject-level thinker would then say that the "aliens" explanation is as good as the "ghosts" explanation until one is preferred over the other. And that brings up preference: obviously some answers should be preferred over others. We are *sure* nothing stops aliens from existing, but not sure that ghosts exist at all, so aliens are generally preferable to ghosts
Probably true skepticism (especially Pyrrhonism) is mostly subject-level thinking, in the sense that you probably have to think at the subject level to be a good skeptic
I also wanted to write this: metaphysics *really* benefits from subject-level epistemology. Time and time again I see people (online) talking about models of reality, and they clearly personally prefer a certain model. These people are typically completely ignorant of extremely convincing arguments for / against other reality models. And they're almost always completely ignorant of really important problems, like the *fact* that we can't falsify *any* of the big models of reality (eg: we can't falsify idealism, and we can't falsify physicalism)
Physicalism is complete: there is no phenomenon that we can communicate that physicalism can't explain
For people who are pointing to the hard problem of consciousness: if you have two apparently regular, functional, typical people in front of you, and one person is "conscious" (wiki: "have qualia, phenomenal consciousness, or subjective experiences") and one person is not "conscious" (ie: no subjective experience, etc), then what can you do to distinguish between them? Afaik there is *literally* nothing you can do to distinguish between someone who has "subjective" experiences and one who doesn't. Keep in mind: if you perform any test on them, then you've just performed a physical test. If the test shows a result, then that result is a physical result. (note: beware semantics of "subjective" and "conscious" in this paragraph)
Notice that *neurological consciousness* definitively and obviously exists! Human brains are physically / neurologically conscious and even philosophical zombies (physical humans without metaphysical consciousness) have physical consciousnesses. When you fall asleep, take drugs, etc your physical consciousness changes. You can remove from someone's brain the parts whose representation on an MRI scan diminishes when they go from awake to asleep, and awake to on drugs (or whatever), and they will no longer be physically conscious. They *may* be metaphysically conscious (ie: conscious in some way the physical world cannot explain), but we apparently cannot measure metaphysical consciousness in any way, so it cannot be communicated as something distinct from regular physical consciousness
Again: if you can communicate something, then that thing must be grounded in the physical world, and we cannot attribute it to anything but the physical world. There certainly may be other aspects of reality that aren't part of a physical reality, and the physical world might supervene on something non-physical, but everything we can communicate is in the physical world
(!solipsism trigger warning!) Now, assuming there *are* metaphysical consciousnesses (MPCs) attached to physical consciousnesses, then imagine how those MPCs must feel (if they *can* feel: feeling is a physical thing; MPCs might just be along for the ride). An MPC would know it's a part of a larger reality, but could do literally nothing to prove that to others. Hell, even its knowledge of others would be obtained through the physical world, so it could never be sure of the existence of other MPCs. Even its knowledge itself is a physical thing. It can't be sure that it is an MPC (because its physical body and physical consciousness are the ones that would be sure)
If everything I said above isn't bullshit and physicalism is complete as I described it, then all problems in the metaphysics of consciousness are solved! \*dusts off hands\*
Another important note: aesthetics aren't evidence
Here's a heuristic for transcendent-al [^1] stuff I thought of recently: assume that no thing happens just once -- which I think is a very reasonable assumption that Occam's razor might even prefer. OR, similarly, assume that a single sample of a distribution (so, n=1) is probably typical. Then you are probably born and die multiple times. ie: Reincarnation is probably more reasonable than other models of the pre/after-life. Though!: you may or may not retain information (ie: memory) through a birth-death cycle, and the form of the successive universes you're born into may be arbitrarily different from this one. Though again, by the same assumption: it's more likely you will have a human form in an earth-like setting if you reincarnate
On this line of thinking: it's really strange that my perspective is attached to a person instead of one of the more numerous living organisms: cells, small animals like rats, birds, insects, etc. Of course, my human body and human neurological consciousness are always going to say that, even without a metaphysical consciousness, but assuming I have a metaphysical consciousness attached to my body, why? It may just be a coincidence
[^1] For posterity: why I post so much about transcendent-ism / existential-ism lately: I had covid (for the third time) in January, which resulted in long covid and periodic anxiety / panic attacks (which I've since learned to manage completely) along with derealization; then I hit my head in May (?) hard enough to get post-concussion syndrome; and I had an ear infection that exacerbated the symptoms of the long covid and post-concussion syndrome (because of the dizziness). Alternatively, there's something else wrong with my brain. Anyway, each time I had an attack of these symptoms, particularly the derealization, I would consider the nature of reality (for obvious reasons). Note: I'm not obsessed with my own mortality, I'm just really interested in metaphysical models of reality
I've been thinking for awhile about this particular sort-of metaphysics model of simulation, and I think I have a sort of framework for it now. The idea was originally spurred on by the concept of Tumbolia in the book GEB. I was wondering if you could get a Tumbolia-like place via encoding and decoding highly entropic processes. For example: you write some words on a piece of paper, then you burn the paper (making it highly entropic because it's scattered randomly in many possible ways). The paper can (theoretically) still be reconstructed with extraordinarily great difficulty and huge amounts of energy, so its information isn't necessarily lost. The trick, then, is doing the same thing so that a time-dependent system continues to consistently evolve in time after it's translated to a highly entropic form
Abstractly: within a system `A`, you have a subsystem `B` and an encoder `s` that maps `B` to a smaller system `b`. If this `s: B -> b` relationship holds as `A` and `b` evolve in time, then I'll say that `A` simulates `b`. Then with the additional assumption that `b`'s entropy is independent of `B`'s entropy, we arrive at the result I was looking for. Here `b` is a system whose information is contained in its host system `A`, and even if `b`'s presence in `A` is arbitrarily entropic, `b` isn't necessarily highly entropic
Note: this all depends on which map `s` you use to change your host subsystem `B` into your informationspace system `b`
This isn't necessarily such a strange idea. Consider: you write a word on a piece of paper. At t = 1, 2, ..., 4 seconds the paper contains the same information; if you look into the informationspace (`b`) of the paper, the content doesn't change. Then at t = 5 seconds you torch the paper. The information is still reconstructable. Since we can define `s` (the encoder from the paper to the informationspace of the paper) to be anything, we can find one that encodes the ashes of the paper such that the paper's information never changes. Inside this `b` the word never changes, despite the paper in the host universe having been torched
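Here's a minimal toy sketch of that -- the word, the "torching" permutation, and the encoder are all made up for illustration. The host evolution scrambles the paper at t = 5 with a recoverable permutation, and `s` is chosen so the informationspace content never changes:

```python
import random

N = 5  # length of the word written on the paper

def torch_permutation():
    """The 'microphysics' of the burning: a fixed, in-principle-recoverable scrambling."""
    perm = list(range(N))
    random.Random(42).shuffle(perm)
    return perm

def evolve(paper, t):
    """Host-universe time evolution of the subsystem B: nothing happens to the
    paper until t = 5, when it is torched (scrambled but not erased)."""
    if t == 5:
        perm = torch_permutation()
        return [paper[i] for i in perm]
    return paper

def s(paper, t):
    """The encoder s: B -> b, chosen so the word in the informationspace never
    changes -- after t = 5 it simply inverts the scrambling."""
    if t < 5:
        return list(paper)
    perm = torch_permutation()
    return [paper[perm.index(i)] for i in range(N)]

paper = list("HELLO")
for t in range(1, 8):
    paper = evolve(paper, t)
    print(t, "host paper:", "".join(paper), "  informationspace:", "".join(s(paper, t)))
```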
I realized only later that this idea has one very strange application: if you take `A` to be the apparent physical universe, `B` as your brain, and `b` as the informationspace of your mind, and you use the entropy-independence assumption above, then even after your brain is destroyed in your host universe you are still consciously aware (from a metaphysical perspective). So at first glance it would seem that after you die (brain destruction) you will see nothing but a void forever. Kind of horrifying tbh
*However*, a keen reader might note that in the case of an encoded mind there is necessarily an information-reduced version `a` of `A` in the informationspace `b` (analogous to your model of external reality in your mind). Since `a` *also* doesn't have to become highly entropic when `B` does, this implies that after you die you *smoothly enter an arbitrarily different universe*
Even weirder: the informationspace projection of the host universe `a'` must necessarily appear to be simulating `b` even after death. Not only do you enter a different universe, but you can't tell that the universe you're now in is itself encoded within a highly entropic process in the pre-death universe
Assuming all of the above holds, doesn't that imply that the host universe simulating us right now is itself the projection of some highly entropic process in some other universe `A'`? And the same implication holds for `A'`: the universe that came before this one is also being simulated by a highly entropic process in another universe. And so on
Naturally, this all raises the question of which is the *real* host universe that's doing the simulating. Interestingly, in every case, you are being simulated by *a* host universe. Instead of thinking in terms of a dichotomy between the universe you appear to be simulated by and the universe that should be simulating that universe, you can instead be sure that there is an underlying host universe that's simulating you, and what you're seeing is always some informationspace projection of it
The more exotic way of viewing this is that the information is more important than the simulation relationship. In this model, you are actually independent from your apparent host universe `A`, but there is a map from the information in a subprocess `B` to your information `b`, and that map is incidentally accurate and remains accurate through time. Then the reality (up to simulation) is just a wad of maps between informationspaces
I've been thinking about automation degrees again lately
Here's an abstract graph that illustrates three possible growth rates for the size of code needed to achieve various automation degrees [^1]. Size of code here (*ELOC*) can be any code size metric, but literal lines of code seems to be a good proxy. Note that there is probably some sort of natural exponential (\(O(\exp n)\)) or combinatorial (\(O(n!)\)) increase in lines of code just for handling the code's own complexity, but I'm ignoring that. So in a sense this is size of code without the natural complexity cost
There's almost certainly no way to know what the lines of code of any program will be before actually writing it, let alone what it is in general (across all possible writings of the same program), but this is just illustrative in the same way big-O notation is
The *A* case (green) has the lines-of-code growth rate decreasing as automation degree increases, like \(O(n \ln n)\), \(O(\ln n)\), \(O(n^\alpha)\) where \(0 < \alpha < 1\), etc. This means it's getting easier and easier to add higher-order capabilities to the program. This seems to predict that it might be easier to go from automation 1 to automation 2 (eg: a nail-hammering machine, to a machine that can figure out how to make nail-hammering machines) than it is to go from automation 0 to automation 1 (eg: hammering a nail yourself, to a machine that can hammer the nail). That *seems* wrong because there are many automation 1 programs in existence but no automation 2 programs. But we (probably) don't know how to make an automation 2 program, so maybe it's easier than expected once you know how
I imagine the *A* case would look like the programmer leveraging the program itself to produce its own higher order capabilities. For example, if you build some generic solver for the program, then you might be able to use that solver within a more general solver, and so on. Sort of like how we use programs to automate the design of new CPU architectures which can then run the same programs on themselves
The *C* case (blue) has the lines-of-code growth rate increasing as automation degree increases, like \(O(\exp n)\), \(O(n!)\), etc. When code size is a proxy for difficulty -- which is very frequently the case -- this means it becomes harder and harder to add higher-order capabilities to the program as the size of the program's code increases. I would expect this to be the case when edge cases require extra code. Though, generalization is ultimately preferred, so edge cases might not make sense generally
The *B* case (red) is between the other cases and is linear or roughly linear (linear on average, its derivative converges to a non-zero constant, etc)
Note: at some point the program should pass a threshold where it can automate the original programmer entirely (probably at automation 2). After this point the program's lines of code / program size metric must be constant. It isn't necessarily the case that the ELOC function approaches this threshold smoothly, but if you assume it does then the ELOC function is probably some sigmoid or sum of sigmoids
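Purely as an illustration of these shapes (made-up functional forms, not fits to anything -- and the degree-2 threshold location is just the assumption from the note above):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative, made-up functional forms for the three cases.
d = np.linspace(0.0, 3.0, 301)        # automation degree, treated as continuous

eloc_A = 1000 * np.sqrt(d)            # case A: sublinear growth (e.g. O(n^alpha), alpha < 1)
eloc_B = 1000 * d                     # case B: roughly linear growth
eloc_C = 100 * (np.exp(d) - 1)        # case C: superlinear growth (e.g. O(exp n))

# If ELOC approaches the "programmer fully automated" threshold smoothly
# (assumed here to sit around automation degree 2), a single sigmoid is one
# simple shape: code size grows, then flattens once the program maintains itself.
plateau, midpoint, steepness = 2000.0, 1.5, 4.0
eloc_sigmoid = plateau / (1.0 + np.exp(-steepness * (d - midpoint)))

for label, y in [("A (sublinear)", eloc_A), ("B (linear)", eloc_B),
                 ("C (superlinear)", eloc_C), ("sigmoid ELOC", eloc_sigmoid)]:
    plt.plot(d, y, label=label)
plt.xlabel("automation degree")
plt.ylabel("ELOC (illustrative units)")
plt.legend()
plt.show()
```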
Another note: transformer systems like GPT-4+ are relatively tiny programs, and I think they would classify under scenario *A*. Just adding a slightly different architecture and amping up the size of the tensors you're using is enough to get near-human programming abilities. With slightly more code for planning, designing, modeling, etc it may be possible to make an automation 2 system using transformers that can program itself zero-shot
A third note: the order of the intersection points \(S_{i\, j}\) in reality isn't necessarily in the order shown on the graph
[^1] see [my other post on mastodon](https://qoto.org/@jmacc/111482842687834034) for what are automation degrees
My subconscious / brain / whatever is recognizing some new analogy between pretty disparate things, as it sometimes does, precipitating some new understanding of mine. I usually describe it like a slow-moving bubble rising from dark water. It's hard to see at first, but then eventually it reaches the surface and makes sense
Anyway, the bubble here is definitely related to: transcendent metarecursive processes, and higher-order automation. And there's some other stuff in there too, but I'm not sure what, because it's just some vaguely related random stuff I'm encountering that seems to be related
Metarecursive processes are recursive processes (ie: they refer to themselves) whose recursion makes use of metarecursion (ie: their recursion is metarecursive). That looks like a process that sort of recursively builds new recursive processes in a (hopefully) deterministic way. And by 'transcendent' (in transcendent metarecursive processes) I mean this really wicked thing that happens when you are trying to classify the degree of application of metarecursion to itself... Well, honestly I can't explain it properly because I don't understand it well. The gist is that there seem to be different classes for the enumerability of recursively attaching a meta- prefix to something, where the first class might be arbitrarily many meta- prefixes on something, while the second class isn't necessarily just more metas attached. I originally ran into this problem when I was trying to make a metarecursive term-rewriting process to make new classes of ultra-superhuge numbers for fun
And for higher-order automation: automation is like doing something without a person being involved, but there seem to be degrees to that (before you arrive at AGI, and probably afterward too). You can do something, that's automation 0 (eg: sweeping your floor). You can automate doing the thing, that's automation 1 (eg: telling your robot to sweep your floor). You can automate automating the thing, that's automation 2 (and incidentally it's meta-automation) (eg: telling your robot to develop new automation techniques). But it isn't necessarily the case that automation 3 is meta-meta-automation. Higher-order automation is automation 2 and beyond
The strange thing is that despite the above concepts being remarkably intellectual-but-not-practical, I am getting the sense of them when I'm doing just regular old, practical stuff. Like washing dishes! Why would transcendent metarecursion and higher-than-meta-automation have anything to do with washing dishes? Lower-order automation does, obviously. I suppose a process at automation 2 is the first step in creating the dish-washing robot: to automate washing dishes, you first have to delve into higher-order automation
Anyway...
If yall have any thoughts on these subjects tell me!
I really feel like people don't emphasize spread minimization enough when programming. Here the "spread" of something is the literal distances between elements in that thing, or the distances between things. In programming it's generally something like the distance between two functions when one calls the other. One example where minimizing spread is obvious: say you have a function `f` that calls a helper function `g`, and `g` is only called by `f`. The most obvious place to put `g` is right next to `f`. In languages where you can put functions in any order, it generally doesn't make much sense to put, for example, `f` at the bottom of a file and `g` at the top of the file
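A tiny sketch of the layout I mean, with hypothetical `f` and `g`:

```python
def g(x):
    # helper used only by f(); it lives directly next to its one caller,
    # so reading f() never requires jumping across the file
    return x * x

def f(xs):
    # the only caller of g()
    return sum(g(x) for x in xs)
```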
Of course the ultimate determiner of where things should go is where they will be the most comprehensible to a programmer trying to understand the code. And minimizing spread is probably one way to do that (it's at least correlated with it). In the best case, everything you're looking at fits on the screen at once, which lets you read between things more quickly, and necessarily means minimized spread
There's also another kind of spread that I'll call control flow spread, which is how far the control flow bounces around your lines and files while the program is actually executing. If every other step your debugger is opening another file, then your program's control flow spread is probably pretty high. That's not necessarily a bad thing; rather, its good-vs-badness is probably in the same category as abstraction's. An abstraction is generally very helpful because it generally encompasses more behaviors and it's more comprehensible when the abstracted component is looked at in isolation, but (at least in my experience) it tends to make it harder to reason about the system as a whole, or at least the interactions between abstract entities in the system, because mental concretization of abstractions is effortful
As so often seems the case, spread minimization is competing with other principles such as minimizing code duplication. If you choose to take some duplicated code and put it into its own function, you've probably just introduced both textual and control flow spread, along with increased abstraction. Again, those aren't necessarily bad things, but they do have consequences
Occasionally while programming, I will hit a point where I'm thinking slowly, coding slowly, and generally everything is going slowly. I typically can't really solve programming problems in this state. But I've found that when this happens, starting to program auxiliary functions, general-purpose tools, generic definitions, etc -- things unrelated to the cause of the immediate slowdown -- helps to break the slowdown and keep momentum up
eg: Let's say you have a library of shapes and you are programming a function to check the intersection of two shapes, but you are having trouble keeping momentum up, and things are slowing down. Instead of doing the thing where you sit there with programmer's writer's block, trying to mentally reconstruct the same old problem-structure in your mind as it's actively falling apart, you can start programming other things related to shapes: a method to calculate the volume / area; functions to translate, rotate, and scale shapes in space; variables for a unit version of each shape; etc. And this might help alleviate the slowdown. Incidentally, even if it doesn't help you start working on your problem again, you've just programmed a bunch of stuff that is potentially useful later
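For concreteness, the kind of auxiliary code I mean (a hypothetical `Circle` from a hypothetical shape library):

```python
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Circle:
    x: float
    y: float
    r: float

    def area(self) -> float:
        return math.pi * self.r ** 2

    def translate(self, dx: float, dy: float) -> "Circle":
        return Circle(self.x + dx, self.y + dy, self.r)

    def scale(self, k: float) -> "Circle":
        return Circle(self.x, self.y, self.r * k)

# a "unit version" of the shape -- handy later as a fixture when the
# intersection function finally gets written
UNIT_CIRCLE = Circle(0.0, 0.0, 1.0)
```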
Incidentally, this looks a lot like the switchover from top-down design to bottom-up design: where you go from solving your problem by composing things you have, to making new things you can compose together and solve problems later
It feels like slowdowns like this are frequently the result of some sort of mental saturation, where part of your brain is worn down from tackling this one problem for so long, that it effectively can't handle it anymore. Like, if you imagine you're building something irl: you start with a (hopefully) very clean workspace, but as you work stuff accumulates, and under certain conditions the accumulation can begin to slow down or outright stop new work in the workspace. But you clean your workspace up to make it ready for new work. Based on my own experience, I imagine human brains behave analogously: as you work on a problem your brain becomes saturated with mental cruft until its ability to work on the problem is slowed or stopped
Even when you're not sitting down and working a problem, you can work the problem while going about doing other things (if those things don't require your full attention) like this: take aspects of your environment, your actions, your thoughts, etc and analogically morph them into your problem, then see how they behave. eg: You might be eating breakfast and see a picture of cows or something, and you start mentally linking the cows into a network to help you solve a graph-related problem. Another eg: you might be talking to someone about other people (that's gossip) and you imagine the relationships involved are a stack, and you're popping and pushing people while talking to help you solve a programming problem
I've been doing this while going about my morning routine today and it does seem very effective
I bet it's best to try a variety of analogies, or as many analogies as you can come up with, rather than just one. I mean, I bet variety is the key to this technique
I'm imagining this variant of chess that goes like this: there are literally NO rules. Literally *no* rules. You could just declare yourself the winner if you want, and the other player could declare themselves the winner too. You could retcon the rules so that only you can win. Etc
Why? Because that would be no fun. But, specifically, if you actually want to have fun playing it, then you can't just blindly optimize for your own success while playing it. It forces you to not goodhart winning the game. Or, rather, it's extremely easy to see that you're goodharting winning, and so if you end up playing a good game of this variant of chess, you must not have goodharted it
I actually hate chess, but I thought of this while playing a different (video) game. I was cheating. As is so often the case with blindly cheating: it's not really that fun. But I was cheating in such a way as to simulate the actual rules of the game being different. Sort of like the same kind of thing as when you do a self-imposed challenge. I was very careful to notice what my impulse was, and what I thought fit the spirit of what I was doing, rather than what I wanted superficially
This is related to the following anti-bad-actor tactic: you simply give everyone every opportunity to be bad, and when the bad ones are bad you ban em, (virtually) arrest em, etc
It's also related to playing make-believe. You could theoretically give yourself an arbitrarily advantageous position in make-believe space and win out over the people you're playing make-believe with, but the point of playing make-believe isn't to win like that
It's probably also related to life in general. You have a lot of freedom to do whatever you want. But when you reduce everything in your life to X thing, and lay into that one thing really hard, then in the end you find you never won, and wiser people might even say that you lost
Existential-ism
Here's more of Joe's unhinged existential philosophical ramblings at night:
Any time I encounter any idea about the nature of consciousness in reality, I just apply this thought-tool which, idk, I'll call the "material drone nondifferentiability principle" (MDNDP). Suppose you have an idea P about consciousness / souls / whatever that you think might be right. Imagine a purely physical, deterministic, machine-like universe that looks exactly identical to our universe but where P definitely doesn't apply to anything, and imagine a human equivalent creature D within this universe. Would you be surprised if this creature came up with P, thought it was true, and thought it applied to the creatures in its universe? If you wouldn't be surprised, then you probably agree that you can't use your thinking up P to differentiate whether you're in a universe where it does apply or doesn't apply
ie: Just thinking up P can't be used to differentiate the universe you're actually in
There's also a version involving a robot instead of an entire universe. Suppose you think you have a soul / consciousness / whatever. Now I build a robot that looks and acts exactly like a person, but is completely deterministic in its functioning. Everything it does and says has a specific, determinable cause electronically / mechanically / logically / whatever. Now it walks into the room and you tell it (thinking it's a person, since it's a perfect replica of one) that it has a soul / consciousness / whatever. But, disregarding models of souls / consciousnesses / etc that attach them to everything / information / etc, the robot probably doesn't match what you had in mind when you were talking about your immortal soul / innate consciousness / whatever
Here's an example of applying this thought-tool: suppose I imagine "I'm" an immortal perspective attached to my body. When "I" die my perspective will simply shift to a new body, etc. Applying the thought-tool: the material drone in the purely physical universe thinks the same thing! But they're also wrong. So I enigmatically can't differentiate whether I actually am or am not an immortal perspective just because I know of the idea
This presents a really strange situation. Imagine if I REALLY AM an immortal perspective attached to my body, or an ensouled body, or have a consciousness beyond just a neurological consciousness, or whatever. I can't differentiate between "I" knowing that I am more than just a physical body, and just my physical body knowing it's more than just a physical body (which it would ironically think even if it wasn't)
Notice that even in cases where people have had experiences (eg: psychedelic drugs, NDEs, etc) that they use as evidence for their model of reality, when you apply the thought-tool it's clear that a purely material drone might have the same experience (which is entirely simulated by its brain) and think the same thing
Unfortunately, essentially all metaphysical models that say anything about consciousness fall prey to this thought-tool and so can't be used for differentiation. It's conceivable that even physicalism (which has lmao a lot of evidence to support it) doesn't pass the tool: imagine a pure idealist consciousness in an idealist reality, and imagine it simulates a self that thinks physicalism is true, and its simulated self really believes that
Of course, the even larger problem is: literally any reality could potentially be simulated by a higher reality. You can't "know" your model of reality is the ultimate one because the reality you find yourself in might be simulated. And, standard note on what I mean by simulated: I mean it here in the epiphenomenal / supervenience sense, not in the The Matrix sense; a parent universe simulating your universe could be *beyond* *literally anything*
Take a positive integer `n` and factor it into a product of primes, then replace multiplication in that product with addition. eg: 6 = 2 * 3 becomes 5 = 2 + 3. Here's the plot of the number you get from that sum (vertical axis) for each of the originally factored numbers `n` (horizontal axis). Notice that many of these sums are equal to the original number. All primes are guaranteed to have their prime-sum equal to themselves because they are their own single prime factor
This procedure must have a name, but I don't know what it is
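A minimal sketch of the computation (simple trial-division factoring, fine for small `n`):

```python
def prime_factor_sum(n: int) -> int:
    """Sum of the prime factors of n, counted with multiplicity
    (e.g. 6 = 2 * 3 -> 2 + 3 = 5, 12 = 2 * 2 * 3 -> 7)."""
    total, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            total += d
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        total += n
    return total

# points for the plot described above: (n, prime_factor_sum(n))
points = [(n, prime_factor_sum(n)) for n in range(2, 1000)]

# primes land on the diagonal: their only prime factor is themselves
assert prime_factor_sum(13) == 13
assert prime_factor_sum(6) == 5
```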