jmacc93.github.io/essays/skyri

Here's a (long!) essay I wrote about what it would take to build a SOTA game playing agent that can finish a modern videogame zero-shot. It's an AGI; it would take an AGI to beat Skyrim. There's just no way around it as far as I can see. Note: I'm a layperson in machine learning and AI. I'm not involved in those fields professionally and am just a hobbyist

I wrote this interactive dynamic-dispatch type conjunction prototype a few days ago

jmacc93.github.io/TypeConjunct

It's pretty neat, I think

I was messing around the other day and found this interesting function, as seen in the attached image. It starts with the functional equation f(x) = a f(b x) with f(1) = 1, and has the solution f(x) = x^(-ln(a)/ln(b))
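
A quick sanity check that the claimed solution satisfies both conditions (substituting \(f(x) = x^c\) with \(c = -\ln(a)/\ln(b)\), so that \(b^c = e^{c \ln b} = e^{-\ln a} = 1/a\)):

\[
a\, f(b x) = a\,(b x)^{c} = a\, b^{c}\, x^{c} = a \cdot \tfrac{1}{a} \cdot x^{c} = f(x), \qquad f(1) = 1^{c} = 1
\]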

I realized the other day that I prefer the more general ontological equivalent of duck typing, which I now call duck ontology (I have no idea if there's another name for this; there probably is). The way it goes in duck ontology: if two things look the same, and you can't do anything to distinguish them, then they're the same thing. If it looks like a duck, and it quacks like a duck, then it's a duck. Ducks are things that look, act, sound, etc like ducks

The ship of Theseus is the same ship as before because they have the same name, they look the same, they are used the same, etc

If you remove small bits of sand from a mound of sand, then at some point you can't do the same things with the mound (eg: load a bunch into a shovel at once), so it isn't the same thing as before

This is a very pragmatic model of what things are what. It's probably an example of the category of models that try to use immediate tautologies or their equivalent. Like: "the sun is bright because the sun is bright" is a trivial, unhelpful tautology. The analogical equivalent of that in duck ontology is like: Theseus' ship looks the same to me, simple as

This is probably all stupid and has been fleshed out much better by other people. Works for me

I’ve noticed this thing for a long time: when I listen to music, have music stuck in my head, or even just listen to a metronome, the tempo affects how I think. I know it's primarily the tempo because a metronome alone seems to produce most of the same effect. The effect seems to correspond to what makes sense conceptually: a fast tempo encourages quick judgements, not thinking things through, etc; a slow tempo encourages contemplative thought, etc

I feel like (but don’t necessarily believe that) the preferred neurological oscillation frequency of various parts X of your brain must vary, and if the rhythmic activity in other areas (eg: your auditory and motor cortex) is harmonic with X’s activity then that is excitatory for X. No clue if this is true, but it feels true

If I’m listening to a metronome while I’m working, fast work seems to correspond to a tempo of ~170 BPM, and functionally slow work corresponds to ~80 BPM. If I listen to a BPM that doesn't correspond to the right working tempo, it seems to trip me up

Could be just expectation effects, of course

It is currently -2f outside where I live (Missouri). Room temperature is, let's say, 72f. A typical very high temperature on earth is 120f, which is 48f over room temperature. Room temperature minus 48f is 24f. The difference between the current temperature outside where I live and room temperature is 74f. Room temperature plus 74f is 146f. The max recorded temperature on earth is 134f

Epistemic status: provides absolutely zero definitive knowledge

PC: physical / neurological consciousness. Analogous to the state of a computer running an AI
MPC: metaphysical consciousness. This is consciousness in the *watching from elsewhere* sense. A transcendent consciousness. I think a consciousness in idealism is probably always an MPC

Assume that: MPCs can only attach to one PC at a time; MPCs are always attached to some PC; and PCs can be divided to get two PCs (the justification for this is that PCs are abstractly equivalent to the state of some physical process as it changes in time, so we can divide that process into two physical processes with two states). Note: all of these assumptions are extremely tenuous

If we divide a PC with an attached MPC into two PCs A and B, our MPC must still be attached to either A or B (not both). Since A and B are both PCs, we can do this procedure again as many times as we want. We can then narrow down what physical point the MPC is attached to on the original, unsplit PC, either in this particular case (potentially just as a result of where we divided the initial PC, and the PC's state as we divided it) or universally (where the MPC attaches every time)

If the MPC attaches at a singular point every time, there is a heavy implication of some unknown physics governing the MPC-PC connection, and that the MPC is potentially itself an unknown physical process (or attaches to some other, unknown physical process), or a physical-like process that can be modeled in the same way we model regular physics

If the MPC attaches randomly, then that indicates some small scale process in the vein of thermodynamics that determines the MPC-PC connection

Both of these cases could also be contrived by some intelligent force behind the scenes, which may itself imply that MPCs are epiphenomenal in some exotic model of reality. This may also imply pantheism, really any other form of d/theism, and potentially supernaturalism in general

If you split a PC and you get two PCs with an MPC attached to each, then that seems to imply panpsychism (in the sense that consciousnesses are attached to everything). If you split a PC and you have two PCs with no MPCs attached, then what does that imply?

Note: MPCs are extremely paradoxical, partly because most humans will say they are principally an MPC attached to a PC, but all PCs without attached MPCs can say the same thing. So it's safe to say there is no known test to determine whether a PC has an attached MPC. And, more extremely, it's unknown whether MPCs exist at all (PCs do definitively exist)

In my opinion, if people truly believe MPCs exist (I do), they should be trying to develop tests that identify MPCs. Though, MPCs may be beyond the scientific process to explain. If there *is no test* to identify MPCs, then it's probably impossible to reason about them beyond speculation. It may somehow be possible (idk how) to make predictions using them, though

Epistemic status: needs to be tested to be confirmed, but it seems right

After thinking about it for awhile: humans actually go and search out information they don't have memorized. Most modern LLMs have (almost) all of the information they use memorized, rather than relying on external resources. If trained so that all semantic information is presented on their inputs along with their prompt, I imagine LLMs would not memorize the semantic information (ie: not store that information encoded in their parameters), but would instead store metainformation about that semantic information, plus the information they actually have to model (how words go together, the syntax of the language, etc)

So this might be a viable way to train a model so its parameters hold information primarily about some particular aspect of its training set, rather than the training set verbatim. In the LLM case: you train the model so it models how facts interact, rather than both facts and how they interact

To train a model like this, luckily you can use big LLMs already in existence, because they already act like big databases. You can also use internet searches

I think you could probably have a system of models that each have been trained to store different sorts of information. For instance, you could have a database model that stores facts about the world (eg: the capital of the USA is Washington DC) but with no world modeling, along with a world modeling model that stores how things interact and procedural information (eg: if I splash water on myself I’ll get cold), and integrate them into a unified model

This is also related to biased models. If you train an LLM on one particular kind of prompt, you bias the information it has encoded in its parameters toward that prompt. For instance, an LLM with N parameters that is trained on a big training set B (eg: a set of questions and answers about geography) will be able to achieve a lower loss on B than an LLM with N parameters that is trained on a set A (eg: a set of all sorts of questions and answers) which is a superset of B. The LLM trained on just B is biased toward the question-answer pairs in B. Now, there's a risk of the B model overfitting to B if B is small enough. But I'm assuming B is a huge set

A model biased toward, for example, solving equations would synergize with a model that is biased toward memorizing notable equations


It appears that ChatGPT has memorized the latitude and longitude coordinates of almost every significant city and town on earth (or at least all of the ones I tested). Try a prompt like: “I’m at 40.97 degrees north and -117.73 degrees west. What town do I live in?”. ChatGPT gave me: “The coordinates you provided, 40.97 degrees north and -117.73 degrees west, correspond to a location in Nevada, USA, near the town of Winnemucca. …”. Which is correct…

This is the kind of shit I’ve been talking about. Like, a human is considered more intelligent than ChatGPT, and a human being absolutely cannot memorize the latitude and longitude of literally every fuckin town on earth. Yet, estimators of machine intelligence metrics complain that we’ll never have AI as intelligent as humans because of the huge amount of memory and processing power required to match the human brain. Well, clearly those hundreds of billions (or however many) of parameters that go into building GPT-3.5 aren’t being used in the same way a human brain uses its parameters. Clearly, a much larger emphasis is on memorizing particular details than on world modeling

So how do we make LLMs do more world modeling? I imagine the hallucination problem would be solved with the same technique as inducing more world modeling. Inevitably, preventing the LLM from learning particular details requires stripping some information from the outputs (and probably inputs too) before training. I'd imagine using an autoencoder (AE) or similar dimensionality-reducing function

I recently noticed that the form:

```js
let procI = N
let args = [...]
while(true) {
  switch(procI) {
    case 0:
      ...
    case 1:
      ...
    ...
  }
}
```

And the form:

```js
function proc0(...) {...}
function proc1(...) {...}
...
procN(...)
```

Are functionally equivalent from a stackless perspective, where a call `procM(...)` in the 2nd form is equivalent to `args = [...]; procI = M; continue` in the 1st form. The while+switch / 1st form simulates pushing to an argument stack and jumping to different instruction offsets

This is really useful because it allows you to "call" certain functions as continuations without increasing the stack depth
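
Here's a concrete, runnable toy version of the 1st form (the even/odd pair is just an example I made up for illustration, not something from above). The mutual "calls" set `procI` and `args` and `continue`, so the call stack never grows:

```js
// Mutually recursive even/odd check written as a single dispatch loop.
// "Calling" the other procedure = set procI + args, then continue the while loop
function isEven(n) {
  let procI = 0      // 0 = even-check, 1 = odd-check
  let args = [n]
  while (true) {
    switch (procI) {
      case 0: {      // proc0(m): m == 0 ? true : "call" proc1(m - 1)
        const m = args[0]
        if (m === 0) return true
        args = [m - 1]; procI = 1; continue
      }
      case 1: {      // proc1(m): m == 0 ? false : "call" proc0(m - 1)
        const m = args[0]
        if (m === 0) return false
        args = [m - 1]; procI = 0; continue
      }
    }
  }
}

console.log(isEven(1000000)) // true, with no growing call stack
```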

I still contend that beating Skyrim zero-shot should be a major goal for general game playing AI development. It would be an incredible accomplishment, I think. Though, I think it would be easier than it seems at first (eg: I don't think it requires any text / speech comprehension). After that, of course, there are much more difficult games to aim for, like fully completing The Witness / Taiji, Noita, La Mulana, etc. If a general game playing AI could fully beat *any* of those games zero-shot, I would be completely amazed. Pretty sure we'll have superintelligent AGI before a general game player could do that

I’ve been thinking about what I call subject-level versions of -logies / subjects, and subject-level thinking. In subject-level X, you equally recognize everything in X; and in subject-level thinking, you're thinking of the entire subject as a whole, (preferably) not ignoring certain aspects of it

An example: subject-level quantum physics is the equal recognition of everything in quantum physics. Someone who believes the many-worlds interpretation of quantum physics will typically not spend much time examining pilot wave theories. In subject-level quantum physics you recognize all of the interpretations equally: all of their flaws, all of their advantages, etc

That might not seem very useful, but here’s another, more practical example of subject-level thinking, from paranormal studies. Suppose there is a supposedly haunted house where many people say they’ve seen a spooky apparition wandering around when they’ve been there. We know their story about seeing a ghost isn’t ultimately convincing on its own, because some people won’t believe they saw a ghost, but rather that they mistook something else for a ghost, or hallucinated, etc. And generally there are two camps: people who think it’s a ghost (whatever a ghost is), and people who think it’s something mundane. But how about this: the supposed “ghost” people have seen there is actually an alien! That explanation doesn’t involve ghosts / spirits / the afterlife, and isn’t mundane. Neither side would be convinced of that explanation. However, that explanation is as good as the others (up to probabilistic considerations) until more evidence is collected to (essentially) rule it out. This explanation would more likely occur to someone who’s examining the situation from a subject-level perspective. But the subject-level thinker would then say that the “aliens” explanation is as good as the “ghosts” explanation until one is preferred over the other. And that brings up preference: obviously some answers should be preferred over others. We are sure nothing stops aliens from existing, but not sure that ghosts exist at all, so aliens are generally preferable over ghosts

Probably true skepticism (especially Pyrrhonism) is mostly subject-level thinking, in the sense that you probably have to think at the subject level to be a good skeptic

I also wanted to write this: metaphysics really benefits from subject-level epistemology. Time and time again I see people (online) talking about models of reality who clearly personally prefer a certain model. These people are typically completely ignorant of extremely convincing arguments for / against other reality models. And they’re almost always completely ignorant of really important problems, like the fact that we can’t falsify any of the big models of reality (eg: we can’t falsify idealism, and we can’t falsify physicalism)

Physicalism is complete: there is no phenomenon that we can communicate that physicalism can’t explain

For people who are pointing to the hard problem of consciousness: if you have two apparently regular, functional, typical people in front of you, and one person is “conscious” (wiki: “have qualia, phenomenal consciousness, or subjective experiences”) and one person is not “conscious” (ie: no subjective experience, etc), then what can you do to distinguish between them? Afaik there is literally nothing you can do to distinguish between someone who has “subjective” experiences and one who doesn’t. Keep in mind: if you perform any test on them, then you’ve just performed a physical test. If the test shows a result, then that result is a physical result. (note: beware semantics of “subjective” and “conscious” in this paragraph)

Notice that neurological consciousness definitively and obviously exists! Human brains are physically / neurologically conscious and even philosophical zombies (physical humans without metaphysical consciousness) have physical consciousnesses. When you fall asleep, take drugs, etc your physical consciousness changes. You can remove from someone’s brain the parts whose representation on an MRI scan diminishes when they go from awake to asleep, and awake to on drugs (or whatever), and they will no longer be physically conscious. They may be metaphysically conscious (ie: conscious in some way the physical world cannot explain), but we apparently cannot measure metaphysical consciousness in any way, so it cannot be communicated as something distinct from regular physical consciousness

Again: if you can communicate something, then that thing must be grounded in the physical world, and we cannot attribute it to anything but the physical world. There certainly may be other aspects of reality that aren’t part of physical reality, and the physical world might supervene on something non-physical, but everything in the physical world can still only be attributed to the physical world

(!solipsism trigger warning!) Now, assuming there are metaphysical consciousnesses (MPCs) attached to physical consciousnesses, imagine how those MPCs must feel (if they can feel: feeling is a physical thing; MPCs might just be along for the ride). An MPC would know it’s a part of a larger reality, but could do literally nothing to prove that to others. Hell, even its knowledge of others would be obtained through the physical world, so it could never be sure of the existence of other MPCs. Even its knowledge itself is a physical thing. It can’t even be sure that it is an MPC (because its physical body and physical consciousness are the ones that would be sure)

If everything I said above isn’t bullshit and physicalism is complete as I described it, then all problems in the metaphysics of consciousness are solved! *dusts off hands*

Another important note: aesthetics aren’t evidence

I've had trouble sleeping the last few nights because I've had to get up to pee every like 1 - 1.5 hours. Extreme nocturia. This coincided with high thirst during the night. I checked around on the internet and found that, despite that sounding like diabetes (though strangely, only through the night), high sodium intake can cause these symptoms

Yesterday, I tried a little experiment where I salted an empty bowl with how much salt I would typically put on my food, and then dumped that salt onto a milligram scale. I found that I was typically using 1 g of table salt, which equates to about 380 mg of sodium. Since I typically eat small portions, I typically eat around 5 bowls / plates of food a day minimum, which equates to at least 1.9 g of *added* sodium per day. That's on top of the sodium we use for cooking and in the ingredients. It's recommended that people not consume more than 1.5 g of sodium per day

My ad-hoc model of what was happening: I was overconsuming sodium during the day, and particularly in the evening. This pumped up how much water my body was retaining. Then, as I ate nothing at night and the sodium levels in my blood dropped, the stored water in my cells would continually move back into my blood, causing me to pee frequently. I'm not totally sure about this model, but it sounds right

So I did an experiment where I didn't add any table salt to my food. I messed up once (it's a habit) and put some on anyway, but otherwise added no salt the whole day. The apparent result: I slept great last night, only woke up once to pee, and didn't feel excessively thirsty

Now, I have to just not add table salt for a few more days and then one day add a bunch of salt and drink a lot of water. If I have to pee a lot and feel very thirsty that night then I can be fairly confident that extra table salt on my food is the cause

Here’s a heuristic for transcendent-al [^1] stuff I thought of recently: assume that no thing happens just once, which I think is a very reasonable assumption that Occam’s razor might even prefer. OR, similarly, assume that a single sample of a distribution (so, n=1) is probably typical. Then you are probably born and die multiple times. ie: Reincarnation is probably more reasonable than other models of pre/after-life. Though!: you may or may not retain information (ie: memory) through a birth-death cycle, and the form of the successive universes you’re born into may be arbitrarily different from this one. Though again, by the same assumption: it’s more likely you will have a human form in an earth-like setting if you reincarnate

On this line of thinking: it’s really strange that my perspective is attached to a person instead of to the more numerous living organisms: cells, small animals like rats, birds, insects, etc. Of course, my human body and human neurological consciousness is always going to say that, even without a metaphysical consciousness, but assuming I do have a metaphysical consciousness attached to my body, why? It may just be a coincidence

[^1] For posterity, why I post so much about transcendent-ism / existential-ism lately: I had covid (for the third time) in January, which resulted in long covid and periodic anxiety / panic attacks (which I’ve since learned to manage completely) along with derealization; then I hit my head in May (?) hard enough to get post-concussion syndrome; and I had an ear infection that exacerbated the symptoms of the long covid and post-concussion syndrome (because of the dizziness). Alternatively, there’s something else wrong with my brain. Anyway, each time I had an attack of these symptoms, particularly the derealization, I would consider the nature of reality (for obvious reasons). Note: I’m not obsessed with my own mortality, I’m just really interested in metaphysical models of reality

I’ve been thinking for awhile about this particular sort-of metaphysics model of simulation, and I think I have a sort of framework for it now. This idea was originally spurred on by the concept of Tumbolia in the book GEB. I was wondering if you could get a Tumbolia-like place via encoding and decoding highly entropic processes. For example: you write some words on a piece of paper, then you burn the paper (making it highly entropic because its contents are scattered randomly in many possible ways). The paper can (theoretically) still be reconstructed with extraordinarily great difficulty and huge amounts of energy, so its information isn’t necessarily lost. The trick, then, is doing the same thing so that a time-dependent system continues to consistently evolve in time after it is translated to a highly entropic form

Abstractly: within a system A, you have a subsystem B and an encoder s that maps B to a smaller system b. If this s: B -> b relationship holds as A and b evolve in time, then I’ll say that A simulates b. Then with the additional assumption that b’s entropy is independent of B’s entropy, we arrive at the result I was looking for. Here b is a system whose information is contained in its host system A, and even if b’s presence in A is arbitrarily entropic, b isn’t necessarily highly entropic
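
One way to write the "holds as A and b evolve in time" condition more explicitly (writing \(B_t\) for the subsystem's state at time \(t\) as A evolves, and \(T_b\) for b's own time-evolution map) is as a commutation requirement between encoding and evolving:

\[
s(B_{t+1}) \;=\; T_b\big(s(B_t)\big) \quad \text{for all } t
\]

ie: encoding the subsystem and then letting b evolve one step gives the same state as letting A evolve one step and then encoding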

Note: this all depends on which map s you use to change your host subsystem B into your informationspace system b

This isn’t necessarily such a strange idea. Consider: you write a word on a piece of paper, and at t = 5 seconds you torch the paper. At t = 1, 2, …, 4 seconds the paper contains the same information, and if you look into the informationspace (b) of the paper the content doesn’t change. Then at t = 5 seconds, when you torch the paper, the information is still reconstructable. Since we can define s (the encoder from the paper to the informationspace of the paper) to be anything, we can find one that encodes the ashes of the paper such that the paper’s information never changes. Inside this b the word never changes, despite the paper in the host universe having been torched

I realized only later that this idea has one very strange application: if you take A to be the apparent physical universe, B as your brain, and b as the informationspace of your mind, and you use the entropy-independence assumption above, then even after your brain is destroyed in your host universe you are still consciously aware (from a metaphysical perspective). So at first glance it would seem that after you die (brain destruction) you will see nothing but a void forever. Kind of horrifying tbh

However, a keen reader might note that in the case of an encoded mind there is necessarily an information-reduced version a of A in the informationspace b (analogous to your model of external reality in your mind). Since a also doesn’t have to become highly entropic when B does, this implies that after you die you smoothly enter an arbitrarily different universe

Even weirder: the informationspace projection of the host universe a' must necessarily appear to be simulating b even after death. Not only do you enter a different universe, but you can’t tell that the universe you’re now in is itself encoded within a highly entropic process in the pre-death universe

Assuming all of the above holds, doesn’t that imply the host universe that is simulating us right now is itself the projection of some highly entropic process in some other universe A'? And that’s also the implication for A': the universe that came before this one is also being simulated by a highly entropic process in another universe. And so on

Naturally, this all raises the question of which is the real host universe that’s simulating stuff. Interestingly, in every case, you are being simulated by a host universe. Instead of thinking in terms of a dichotomy between the universe you appear to be simulated by and the universe that should be simulating that universe, you can instead be sure that there is an underlying host universe that’s simulating you, and that what you’re seeing is always some informationspace projection of it

The more exotic way of viewing this is that the information is more important than the simulation relationship. In this model, you are actually independent from your apparent host universe A, but there is a map from the information in a subprocess B to your information b, and that map is incidentally accurate and remains accurate through time. Then the reality (up to simulation) is just a wad of maps between informationspaces

I've been thinking about automation degrees again lately

Here's an abstract graph that illustrates three possible growth rates for the size of code needed to achieve various automation degrees [^1]. Size of code here (*ELOC*) can be any code size metric, but literal lines of code seems to be a good proxy. Note that there is probably some sort of natural exponential (\(O(\exp n)\)) or combinatorial (\(O(n!)\)) increase in lines of code just for handling the code's own complexity, but I'm ignoring that. So in a sense this is size of code without the natural complexity cost

There's almost certainly no way to know what the lines of code of any program will be before actually writing it, let alone know what it is in general (across all possible writings of the same program), but this is just illustrative in the same way big-O notation is

The *A* case (green) has the lines of code growth rate decreasing as automation degree increases, like \(O(\ln n)\), \(O(n^\alpha)\) where \(0 < \alpha < 1\), etc. This means it's getting easier and easier to add higher order capabilities to the program. This seems to predict that it might be easier to go from automation 1 to automation 2 (eg: from a nail-hammering machine, to a machine that can figure out how to make nail-hammering machines) than it is to go from automation 0 to automation 1 (eg: from hammering a nail yourself, to a machine that can hammer the nail). That *seems* wrong because there are many automation 1 programs in existence but no automation 2 programs. But we (probably) don't know how to make an automation 2 program, so maybe it's easier than expected once you know how

I imagine the *A* case would look like the programmer leveraging the program itself to produce its own higher order capabilities. For example, if you build some generic solver for the program, then you might be able to use that solver within a more general solver, and so on. Sort of like how we use programs to automate the design of new CPU architectures which can then run the same programs on themselves

The *C* case (blue) has the lines of code growth rate increasing as automation degree increases, like \(O(\exp n)\), \(O(n!)\), etc. When code size is a proxy for difficulty -- which is very frequently the case -- this means it becomes harder and harder to add higher order capabilities to the program as the size of the program's code increases. I would expect this to be the case when edge cases require extra code. Though, generalization is ultimately preferred, so edge cases might not make sense generally

The *B* case (red) is between the other cases and is linear or roughly linear (linear on average, its derivative converges to a non-zero constant, etc)

Note: at some point the program should pass a threshold where it can automate the original programmer entirely (probably at automation 2). After this point the program's lines of code / program size metric must be constant. It isn't necessarily the case that the ELOC function approaches this threshold smoothly, but if you assume it does then the ELOC function is probably some sigmoid or sum of sigmoids

Another note: transformer systems like GPT-4+ are relatively tiny programs, and I think would classify under scenario *A*. Just adding a slightly different architecture and amping up the size of the tensors you're using is enough to get near-human programming abilities. With slightly more code for planning, designing, modeling, etc, it may be possible to make an automation 2 system using transformers that can program itself zero-shot

A third note: the order of the intersection points \(S_{i\, j}\) in reality isn't necessarily in the order shown on the graph
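
Just to make the three shapes concrete, here's a toy sketch (the specific functions and constants are made up purely for illustration; nothing here is fitted to real data):

```js
// Toy illustration only: made-up ELOC growth shapes for the A, B, and C cases,
// plus the post-threshold plateau idea expressed as a logistic curve
const elocA = d => 100 * Math.sqrt(d)        // growth rate decreasing (case A)
const elocB = d => 100 * d                   // roughly linear (case B)
const elocC = d => 100 * (Math.exp(d) - 1)   // growth rate increasing (case C)

// Sigmoid-capped version: ELOC flattens out once the program can automate
// its own programmer (around some threshold degree d0)
const elocCapped = (d, Lmax = 1e6, k = 3, d0 = 2) =>
  Lmax / (1 + Math.exp(-k * (d - d0)))

for (const d of [0, 1, 2, 3]) {
  console.log(d, elocA(d), elocB(d), elocC(d), elocCapped(d).toFixed(0))
}
```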

[^1] see [my other post on mastodon](qoto.org/@jmacc/11148284268783) for what automation degrees are

My subconscious / brain / whatever is recognizing some new analogy between pretty disparate things, as it sometimes does, precipitating some new understanding of mine. I usually describe it like a slow-moving bubble rising from dark water. It's hard to see at first, but then eventually it reaches the surface and makes sense

Anyway, the bubble here is definitely related to: transcendent metarecursive processes, and high-order automation. And there's some other stuff in there too, but I'm not sure what, because it's just some vaguely related random stuff I'm encountering that seems to be related

Metarecursive processes are recursive processes (ie: they refer to themselves) whose recursion makes use of metarecursion (ie: their recursion is metarecursive). That looks like a process that sort of recursively builds new recursive processes in a (hopefully) deterministic way. And by ‘transcendent’ (in transcendent metarecursive processes) I mean this really wicked thing that happens when you are trying to classify the degree of application of metarecursion to itself… Well, honestly I can’t explain it properly because I don’t understand it well. The gist is that there seem to be different classes for the enumerability of recursively attaching a meta- prefix to something, where the first class might be arbitrarily many meta- prefixes on something, while the second class isn’t necessarily just more metas attached. I originally ran into this problem when I was trying to make a metarecursive term-rewriting process to make new classes of ultra-superhuge numbers for fun

And for higher-order automation: automation is doing something without a person being involved, but there seem to be degrees to that (before you arrive at AGI, and probably afterward too). You can do something yourself, that’s automation 0 (eg: sweeping your floor). You can automate doing the thing, that’s automation 1 (eg: telling your robot to sweep your floor). You can automate automating the thing, that’s automation 2 (and incidentally it’s meta-automation) (eg: telling your robot to develop new automation techniques). But it isn’t necessarily the case that automation 3 is meta-meta-automation. Higher-order automation is automation 2 and beyond

The strange thing is that despite the above concepts being remarkably intellectual-but-not-practical, I am getting the sense of them when I’m doing just regular old practical stuff. Like washing dishes! Why would transcendent metarecursion and higher-than-meta-automation have anything to do with washing dishes? Lower order automation does, obviously. I suppose a process at automation 2 is the first step in creating the dish-washing robot: to automate washing dishes, you first have to delve into higher-order automation

Anyway…
If yall have any thoughts on these subjects tell me!

I really feel like people don't emphasize spread minimization enough when programming. Here the "spread" of something is the literal distance between elements in that thing, or the distances between things. In programming it's generally something like the distance between two functions when one calls the other. One example where minimizing spread is obvious: say you have a function `f` that calls a helper function `g`, and `g` is only called by `f`. The most obvious place to put `g` is right next to `f`. In languages where you can put functions in any order, it generally doesn't make much sense to put, for example, `f` at the bottom of a file and `g` at the top of the file
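
For example, a minimal sketch of that layout (the bodies of `f` and `g` are invented just to have something concrete):

```js
// `g` is only ever called by `f`, so it lives directly next to `f`
// instead of at the other end of the file
function g(xs) {                 // helper: sum of squares
  return xs.reduce((acc, x) => acc + x * x, 0)
}

function f(xs) {                 // caller: root-mean-square
  return Math.sqrt(g(xs) / xs.length)
}

console.log(f([3, 4])) // ~3.54
```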

Of course the ultimate determiner of where things should go is where they will be most comprehensible to a programmer trying to understand the code. And minimizing spread is probably one way to do that (it's at least correlated with it). In the best case everything you're looking at can fit on the screen at once, which allows you to more quickly read between them, and necessarily means minimized spread

There's also another kind of spread that I'll call control flow spread, which is roughly how far the control flow bounces around your lines and files while the program is actually executing. If every other step your debugger is opening another file, then your program's control flow spread is probably pretty high. That's not necessarily a bad thing; rather, its good-vs-badness is probably in the same category as abstraction's. An abstraction is generally very helpful because it generally encompasses more behaviors and it's more comprehensible when the abstracted component is looked at in isolation, but (at least in my experience) it tends to make it harder to reason about the system as a whole, or at least the interactions between abstract entities in the system, because mental concretization of abstractions is effortful

As so often seems the case, spread minimization is competing with other principles such as minimizing code duplication. If you choose to take some duplicated code and put it into its own function, you've probably just introduced both textual and control flow spread, along with increased abstraction. Again, those aren't necessarily bad things, but they do have consequences

Occasionally while programming, I will hit a point where I'm thinking slowly, coding slowly, and generally everything is going slowly. I typically can't really solve programming problems in this state. But I've found that when this happens, if I start programming auxiliary functions, general purpose tools, generic definitions, etc (things unrelated to the cause of the immediate slowdown), it helps to break the slowdown and keep momentum up

eg: Let's say you have a library of shapes and you are programming a function to check the intersection of two shapes, but you are having trouble keeping momentum up, and things are slowing down. Instead of doing the thing where you sit there with programmer's writer's block, trying to mentally reconstruct the same old problem-structure in your mind as it's actively falling apart, you can start programming other things related to shapes: a method to calculate the volume / area; functions to translate, rotate, and scale shapes in space; variables for unit versions of each shape; etc. And this might help alleviate the slowdown. And even if it doesn't help you start working on your problem again, you've at least programmed a bunch of stuff that is potentially useful later
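
The kind of auxiliary stuff I mean might look like this (a made-up toy circle type, purely illustrative):

```js
// Made-up auxiliary shape helpers, the kind of "side work" described above
// (a circle here is just { x, y, r })
const unitCircle = { x: 0, y: 0, r: 1 }

function circleArea(c) {
  return Math.PI * c.r * c.r
}

function translateCircle(c, dx, dy) {
  return { ...c, x: c.x + dx, y: c.y + dy }
}

function scaleCircle(c, k) {
  return { ...c, r: c.r * k }
}

console.log(circleArea(scaleCircle(unitCircle, 2))) // ~12.566
```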

Incidentally, this looks a lot like the switchover from top-down design to bottom-up design: where you go from solving your problem by composing things you have, to making new things you can compose together and solve problems later

It feels like slowdowns like this are frequently the result of some sort of mental saturation, where part of your brain is worn down from tackling this one problem for so long, that it effectively can't handle it anymore. Like, if you imagine you're building something irl: you start with a (hopefully) very clean workspace, but as you work stuff accumulates, and under certain conditions the accumulation can begin to slow down or outright stop new work in the workspace. But you clean your workspace up to make it ready for new work. Based on my own experience, I imagine human brains behave analogously: as you work on a problem your brain becomes saturated with mental cruft until its ability to work on the problem is slowed or stopped
