
I’ve been lightly helping some of my family members get their new laptop up and running, and installing some programs on it. But it runs Windows 11, and wow, I thought Windows 10 was bad… Windows 11 is completely fucked. I don’t follow Windows news at all, and I was very surprised by how terrible it is

I say it now: I would rather give up videogames 100% and run stuff in VMs than run Windows 11. I have one computer running Windows 10, and everything else is Linux. I just won’t ever use Windows after this one goes, I guess

If you’re thinking about abandoning Windows, I recommend Debian with the Xfce DE

Even when you’re not sitting down and working a problem, you can work the problem while going about doing other things (if those things don’t require your full attention), like this: take aspects of your environment, your actions, thoughts, etc and analogically morph them into your problem, then see how they behave. eg: you might be eating breakfast and see a picture of cows or something, and you start mentally linking the cows into a network to help you solve a graph-related problem. Another eg: you might be talking to someone about other people (that’s gossip), and you imagine the relationships involved are a stack, and you’re popping and pushing people while talking to help you solve a programming problem

I’ve been doing this while going about my morning routine today and it does seem very effective

I bet it’s best to try a variety of analogies, or as many analogies as you can come up with, rather than just one. I mean, I bet variety is the key to this technique

I'm imagining this variant of chess that goes like this: there are literally NO rules. Literally *no* rules. You could just declare yourself the winner if you want, and the other player could declare themselves the winner too. You could retcon the rules so that only you can win. Etc

Why? Because that would be no fun. But, specifically, if you actually want to have fun playing it, then you can't just blindly optimize for your own success while playing it. It forces you not to goodhart winning the game. Or, rather, it's extremely easy to see that you're goodharting winning, and so if you end up playing a good game of this variant of chess, you must not have goodharted it

I actually hate chess, but I thought of this while playing a different (video-) game. I was cheating. As is so often the case with blindly cheating: it's not really that fun. But I was cheating in such a way as to simulate the actual rules of the game being different. Sort of like the same kind of thing as when you do a self-imposed challenge. I was very careful to notice what my impulse was, and what I thought fit the spirit of what I was doing, rather than what I wanted superficially

This is related to the following anti-bad-actor tactic: you simply give everyone every opportunity to be bad, and when the bad ones are bad, you ban ’em, (virtually) arrest ’em, etc

It's also related to playing make believe. You could theoretically give yourself an arbitrarily advantageous position in make-believe space and win out over the people you're playing make-believe with, but the point of playing make-believe isn't to win like that

It's probably also related to life in general. You have a lot of freedom to do whatever you want. But when you reduce everything in your life to one thing X, and lay into that one thing really hard, then in the end you find you never won, and wiser people might even say that you lost

I’ve noticed that a source of a lot of bugs while I’m programming is stack-trace-decorrelated errors and, more generally, location-decorrelated errors. I’ll explain:

When you make a syntax error, the parser catches it and tells you the exact line (and sometimes character number) the error is at, and you can immediately fix it. Here, the location of the detected error is the location of the error itself. If you make an error that the runtime checks for (like setting an out-of-bounds index of an array when the runtime does bounds checking), then an exception is thrown, the stack trace gives the line the exception was thrown at, and you can immediately fix it. Again, the location of the detected error is the location of the error itself. You might also see situations where you set a variable and a few lines later an exception is thrown; in that case the detected error location in the stack trace is correlated (in the informational sense) with the actual error location

But there are certain situations where the real error locations aren’t correlated with the detected error locations. Like: you incorrectly set an object’s property obj.x in function g, called by function F; g finishes, and F then calls function h, where an exception is thrown because the error with obj.x is detected. The stack trace in this case will show F and h, but the actual error location in g won’t appear on the trace. The detected error location is decorrelated from the actual error location. Note that they’re still somewhat correlated, because both are under F, but far less than if the detected error were in g alongside the actual error

One way I’ve found to combat this is to check that some state is in good form immediately after state changes. Like: changeState(obj); checkIsInGoodForm(obj), where changeState could potentially produce a stack-trace-decorrelated error. If changeState does produce an error, then checkIsInGoodForm is likely to catch it. Here, checkIsInGoodForm could be anything from a single assertion (assert(obj.x > 0), assert(i < array.length), etc) to expensive, complex functions. This is also one of the advantages of automatically applied invariants, if your language / runtime supports them: they necessarily apply checkIsInGoodForm automatically after any corresponding state change
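As a minimal sketch of the pattern in Python (the class and function names here are hypothetical stand-ins, not from any particular library):

class Obj:
    def __init__(self, x):
        self.x = x

def change_state(obj):
    obj.x -= 100  # suppose this is the buggy state change

def check_is_in_good_form(obj):
    # Runs immediately after the state change, so a failure's stack
    # trace points at (or right next to) the actual error location
    assert obj.x > 0, f"expected obj.x > 0, got {obj.x}"

obj = Obj(x=10)
change_state(obj)
check_is_in_good_form(obj)  # AssertionError fires here, adjacent to the bug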

If anyone has any other strategies, techniques, resources, etc for preventing and detecting stack-trace-decorrelated errors, please tell me!

I truly wonder at websites for programming languages that not only don't have examples of their language right front and center on their home page, but hide any example of their language so extremely deep within their website that you just give up trying to find any and go to Wikipedia or Rosetta Code instead. And, no, a formal EBNF specification isn't an example of your language

Existential-ism 

Here’s more of Joe’s unhinged existential philosophical ramblings at night:
Any time I encounter any idea about the nature of consciousness in reality, I just apply this thought-tool, which, idk, I’ll call the “material drone nondifferentiability principle” (MDNDP). Suppose you have an idea P about consciousness / souls / whatever that you think might be right. Imagine a purely physical, deterministic, machine-like universe that looks exactly identical to our universe but where P definitely doesn’t apply to anything, and imagine a human-equivalent creature D within this universe. Would you be surprised if this creature came up with P, thought it was true, and thought it applied to the creatures in its universe? If you wouldn’t be surprised, then you probably agree that you can’t use your thinking up P to differentiate whether you’re in a universe where it does apply or doesn’t apply

ie: just thinking up P can’t be used to differentiate which universe you’re actually in

There’s also a version involving a robot instead of an entire universe. Suppose you think you have a soul / consciousness / whatever. Now I build a robot that looks and acts exactly like a person, but is completely deterministic in its functioning. Everything it does and says has a specific, determinable cause electronically / mechanically / logically / whatever. Now it walks into the room and you tell it (thinking it's a person, since it's a perfect replica of one) that it has a soul / consciousness / whatever. But, disregarding models of souls / consciousnesses / etc that attach them to everything / information / etc, the robot probably doesn’t match what you had in mind when you were talking about your immortal soul / innate consciousness / whatever

Here’s an example of applying this thought-tool: suppose I imagine “I’m” an immortal perspective attached to my body. When “I” die my perspective will simply shift to a new body, etc. Applying the thought-tool: the material drone in the purely physical universe thinks the same thing! But they’re also wrong. So I enigmatically can’t differentiate whether I actually am or am not an immortal perspective just because I know of the idea

This presents a really strange situation. Imagine if I REALLY AM an immortal perspective attached to my body, or an ensouled body, or have a consciousness beyond just a neurological consciousness, or whatever. I can’t differentiate between “I” knowing that I am more than just a physical body, and my physical body merely believing it's more than just a physical body (which it would ironically believe even if it wasn’t)

Notice that even in cases where people have had experiences (eg: psychedelic drugs, NDEs, etc) they use as evidence for their model of reality, when you apply the thought-tool it's clear that a purely material drone might have the same experience (entirely simulated by its brain) and think the same thing

Unfortunately, essentially all metaphysical models that say anything about consciousness fall prey to this thought-tool and so can’t be used for differentiation. It's conceivable that even physicalism (which has, lmao, a lot of evidence to support it) doesn’t pass the tool: imagine a pure idealist consciousness in an idealist reality, and imagine it simulates a self that thinks physicalism is true, and its simulated self really believes that

Of course, the even larger problem is: literally any reality could potentially be simulated by a higher reality. You can’t “know” your model of reality is the ultimate one because the reality you find yourself in might be simulated. And, standard note on what I mean by simulated: I mean it here in the epiphenomenal / supervenience sense, not in The Matrix sense; a parent universe simulating your universe could be beyond literally anything

Take a positive integer `n` and factor it into a product of primes, then replace multiplication in that product with addition. eg: 6 = 2 * 3 becomes 5 = 2 + 3. Here's the plot of the number you get from that sum (vertical axis) for each of the originally factored numbers `n` (horizontal axis). Notice that many of these sums are equal to the original number. All primes are guaranteed to have their prime-sum equal to themselves because they are their own single prime factor

This procedure must have a name, but I don't know what it is
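(I believe this is the “sum of prime factors with repetition”, often written sopfr(n) – OEIS A001414, sometimes called the integer logarithm.) A minimal Python sketch of the map, for anyone who wants to reproduce the plot:

def prime_factor_sum(n):
    # Sum the prime factors of n with multiplicity via trial division,
    # eg 12 = 2 * 2 * 3 -> 2 + 2 + 3 = 7
    total, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            total += p
            n //= p
        p += 1
    if n > 1:  # whatever remains is a single prime factor
        total += n
    return total

assert prime_factor_sum(6) == 5   # 2 + 3
assert prime_factor_sum(7) == 7   # primes map to themselves
assert prime_factor_sum(4) == 4   # 2 + 2: a composite fixed point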

Programming related: I feel like this would be helpful for people: I very commonly use the program gaze to automatically execute a script / program I’m working on when I save the file, and with automatically executed tests in the script I get instant feedback on what has failed if I make a change that breaks something

Here’s an example command for a Node script: gaze -r -c "node --test-reporter=dot {{file}}" filename_here.js

The -r switch forces the program to restart when you save. The -c "..." part executes whatever ... is in the quotes using your command shell

To be clear, the workflow with this is: have a terminal with gaze visible on screen; then, in your editor, make changes and ctrl-s to save; the script will run and its output will be immediately visible in the terminal. If you have tests in your script as well, you’ll see when one fails on save

Trigger warning: existential terror (Zeno's Time Capsule; don't open!) 

I’ve been thinking about this thought experiment I’ve been calling Zeno’s Time Capsule:

Zeno’s time capsule is a machine which can contain stuff; when it closes, its internal relative rate of time increases to, then decreases from, a finite-time singularity at some point, say 30 minutes in, until the capsule finally opens at t = 1 hour from an external perspective

So, to be clear: you put something in the time capsule at t = 0, the capsule closes, and the rate of time inside the capsule starts increasing. At t = 30 minutes the capsule's internal rate of time is mathematically infinite, then its rate of time slows until it is a regular 1:1 with the rate of time outside the capsule at t = 1 hour, and then the capsule opens

The rate of time inside the capsule is how fast a clock inside is ticking relative to a clock outside the capsule. So, a clock with a rate of time twice the external rate would be going twice as fast: for every 30 minutes that pass outside the capsule, 1 hour passes inside it
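One way to formalize this (my own choice of rate profile, nothing canonical): with external time t in hours and the singularity at t = 1/2, take

r(t) = \frac{1}{\lvert t - 1/2 \rvert}, \qquad \tau(t) = \int_0^t r(s)\, ds

where r is the internal rate of time and \tau(t) is the internally elapsed time. \tau(t) diverges as t \to 1/2 from either side, while the external duration stays a finite 1 hour.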

So if you put an immortal, indestructible, perfect clock in the capsule and let the capsule go through its thing, the clock on the other side would have a mathematically infinite age. What would the clock say the time is, though? You could probably make a symmetry argument and say the time would be the correct time. But, I think, any answer wouldn’t be any more or less surprising than any other answer, so all answers might be considered equally correct. It's very convenient that our real physical reality apparently chooses outcomes randomly (or realizes all outcomes simultaneously, in the case of MWI). How nice of the universe to provide such neat guard rails to maintain timey singularities’ information firewalls

That’s cool, but say you’re immortal, and you entered the capsule… When the capsule opens after 1 hour, what do you remember? Assuming you haven’t gone completely insane

The 1 hour mark probably isn’t anything really special or unique in this case. If you prematurely open the capsule arbitrarily close to, but before, the singularity point, or at any time after the singularity point, the person inside will probably say something like: “I don’t remember much before a billion years ago, and I don’t remember anything before a hundred quadrillion years ago”. Even if you prematurely open the capsule arbitrarily close to, but after, the singularity, you will still probably get the same answer (I imagine)

This is interesting because there's probably a way for any immortal person to comfortably ride through the time capsule: inside the time capsule is another machine (that’s indestructible, etc) that just resets the person’s memory when used. So every week or whatever, the machine resets the person’s memory, and so at no point does the person feel they’ve been in the capsule for over 1 week. This is Zeno’s Humane Time Capsule

Obviously even if you go for the full ride and do the capsule without the memory-reset machine, there isn’t really anything to remember from inside at the external t = 30 minute mark. Even if there were an “event” at that time, and you put aside some sort of special machine to remember just that one thing, if there’s any chance at all you will misclassify another event as the one event at t = 30 minutes, then (since you experience unboundedly many events on the approach) it's absolutely guaranteed you will have done so

And, even if you had a clock that gave the real outside equivalent time inside, disregarding exotic explanations of what's happening, you most likely would never see the t = 30 minute time. You would see every time arbitrarily close to the t = 30 minute time, including times where the machine literally cannot display how close you are to the t = 30 minute mark, and may even round the time to that point because of that, but you won’t see the exact t = 30 minute mark

I’ve been wanting for a while to ideate a methodology for creating tasks and goals in the situation where you have a goal and one task (which is usually like: ‘work towards the goal’), and want to end up with multiple tasks that can be done concurrently and so can be scheduled in alternating blocks of time. And I finally got around to doing that

I think the core idea behind what I found is: 15% ± 10% of tasks / time / resources should be devoted to situation-analysis, exploring ways to do things, and organizing, including ways to parallelize tasks. There’s no universal way around devoting time to exploring how to parallelize a task. And, generally, all individual tasks can be split into a series of tasks that looks like: Explore / organize it -> Make it -> Extract the best parts from it -> Clean it all up. The exploration / organization part in general has an effect sort of like software testing: you can make do without it, but you end up with an organizational debt that makes your project more brittle, and you feel it hard later on

Another interesting way things are compressed is with generalizations. Statements like 'Fido has a tail', 'Fido has ears', etc, 'Buddy has a tail', 'Buddy has ears', etc, can be compressed into 'Dogs have tails', 'Dogs have ears', etc, plus 'Fido is a dog' and 'Buddy is a dog'. And this is almost certainly one of the reasons why analogical comparison, classification, etc are advantageous to evolve: it takes fewer resources when you use generalizations!

It might not be provable in general (hah), but my feeling is that (in the case of intelligence) you always get generalization when you compress things, and vice versa


A noise-correcting function F that takes an input sequence with random errors in it, and produces an output with the errors corrected, is another type of map that reduces the effective size of an input set. For example: all the possible character sequences constituting correct and meaningful English text, but with misspellings. Since each sequence without misspellings has many possible misspellings, the number of sequences with misspellings is greater than the number without, and so an F which corrects the misspellings compresses the input set (with misspellings) to a much smaller set
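A toy illustration in Python (the correction table is made up; a real F would be far bigger):

# Many noisy variants map onto one corrected string, so the corrector
# compresses its input set down to a much smaller output set
corrections = {
    "teh": "the", "hte": "the", "tghe": "the",
    "recieve": "receive", "receeve": "receive",
}

inputs = set(corrections)             # 5 distinct misspelled sequences
outputs = set(corrections.values())   # 2 distinct corrected sequences
assert len(outputs) < len(inputs)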


One way to make a map M more efficient is to assume it is a composition of maps, and make each constituent map more efficient. For example, let's make M into a composition Mb * Ma, where Ma takes all n^1000 possible inputs and produces n^100 possible outputs, and Mb takes those n^100 possible outputs and produces M's correct output. If Ma is fast, and Mb is in the same speed class as the original M but runs over the much smaller intermediate set, then M's new cost is almost certainly less than its original cost. In this case, you're effectively weeding out (using Ma) the inputs that don't contribute to the output. All the inputs that have a low probability of producing a given output are filtered out by Ma
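A minimal sketch of this in Python (the cheap filter and expensive scorer are stand-ins I made up):

def cheap_filter(candidates, query):
    # Ma: a fast, coarse pass that discards inputs which almost
    # certainly don't contribute to the output (here: first letter)
    return [c for c in candidates if c[:1] == query[:1]]

def expensive_score(candidate, query):
    # Mb: the costly precise computation, now run on far fewer inputs
    return sum(a == b for a, b in zip(candidate, query))

def best_match(candidates, query):
    return max(cheap_filter(candidates, query),
               key=lambda c: expensive_score(c, query))

words = ["apple", "apply", "angle", "brick", "crane"]
print(best_match(words, "appel"))  # only the a-words reach the expensive pass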


A perfect Bayesian agent would be able to predict the correct probabilities for possible outcomes based on all of the information it's seen so far, regardless of how huge and complex the thing it's predicting is, and how much information it's seen. And in terms of machine learning we can actually make such an agent with a simple 2-deep ANN, but the computation costs to do so for even simple systems can be extreme. The challenge is really how to make such an agent efficient


In fact, almost all of those length-1000 input sequences are meaningless noise. This is also true for images: imagine an image with most of it covered up, where the little bit you can see is comprehensible; the set of possible images the whole image could be is huge, and most of those images are random noise save for the part you can see. In terms of machine learning, a more efficient map M would be one that can quickly weed out the possible input sequences that don't effectively contribute to inferring the output


For a map M from an input set X to an output set Y, I'd imagine that for a given particular output y, the set of inputs mapping to y with probabilities greater than some reasonable threshold is typically tiny compared to the size of X. I want to say, ie: output-conditional input set probabilities are typically highly modular (this might not be an accurate compression of the first sentence). For instance: in text prediction given some large input sequence (say the last 1000 characters), for some predicted character y, almost all of the n^1000 possible input sequences have probabilities essentially equal to 0, but there is some relatively small set of possible input sequences with probabilities much closer to 1

Long. Me talking about the programming language (SetTL) I'm developing 

Here’s a snippet of an e2ee (end-to-end execution) test for SetTL – the language I’m writing with set-algebra-isomorphic typing. It’s in the earliest stage of development right now, and I haven’t added any of the actually unique features I’m developing yet. But this snippet is interesting in itself, I think. Its syntax is guaranteed to be unattractive to most people, for funny reasons: it’s sort of like a Lisp (so unattractive to many, many people), but with the function heads on the outside of the parentheses, and the closing parentheses of multiline blocks on their own lines (blasphemous). And notice the inconsistent use of commas. Commas, semicolons, and newlines actually do absolutely nothing right now in the core syntax (they’re equivalent to a space), despite being tokenized. They’re just immediately deleted after being tokenized. I’m a fan of adding extra information to aid comprehension when programming (this is also one of the cool parts about tag types and set typing), so the commas help make the code comprehensible for now (and I removed some here for demonstrative purposes)

do(
  set(x, 100)
  set(steps, 0)
  while(>(x 0), do(
    set(x, -(x 10))
    set(steps, +(steps, 1))
  ))
  assert(==(x 0))
  assert(==(steps, 10))
)

This is the core syntax of SetTL and is a compromise between easily parsable syntax and easily comprehensible syntax. Now I’m going to spend a few days / weeks (not weeks, hopefully) incorporating some of the features of the extended syntax to make it faster and smoother to program in. Specifically, I want to eliminate parenthesis pairs so the actual path your cursor takes when writing or modifying any particular line is more linear. As it is, you have to move your cursor around a lot when blocking any particular sequence of elements, because you have to traverse the entire sequence to add the block opener and closer (ie: ( and ) )

In the (hopefully) near future, SetTL’s syntax will look more like this (when using extended syntax):

{
  let x = 100
  let steps = 0
  while x > 0 {
    x = x - 10
    steps = steps + 1
  }
  assert x == 0
  assert steps == 10
}

Which involves, in no particular order:

  • Line call parsing: [\n,;]? foo x1 x2 ... xn [\n,;]? eq to foo(x1 x2 ... xn) in do blocks {...}
  • Line call parsing: [,;]? foo x1 x2 ... xn [,;]? eq to foo(x1 x2 ... xn) everywhere else
  • Curly brace do block replacement: {...} eq to do(...)
  • let(... = ...) normalization to set(... ...) (note: let will have a lot more power in the future if all goes well)
  • Infix operations a + b eq to +(a b)
  • Chained infix flattening x1 + x2 + ... + xn eq to +(x1 x2 ... xn) instead of +(x1 +(x2 +(x3 ...)))
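Since the chained infix flattening is just a tree rewrite, here’s a minimal Python sketch of that last transformation (the tuple-based AST is a stand-in, not SetTL’s real representation):

def flatten_chain(node):
    # Rewrites nested binary ("+", a, ("+", b, c)) chains into a single
    # n-ary ("+", a, b, c) node, recursing through the whole tree
    if not isinstance(node, tuple):
        return node
    op, *args = node
    flat = []
    for arg in map(flatten_chain, args):
        if isinstance(arg, tuple) and arg[0] == op:
            flat.extend(arg[1:])  # merge a same-operator child into its parent
        else:
            flat.append(arg)
    return (op, *flat)

assert flatten_chain(("+", "x1", ("+", "x2", ("+", "x3", "x4")))) \
    == ("+", "x1", "x2", "x3", "x4")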

These additions will make commas, semicolons, and newlines act like proper separators, so for instance you’ll be able to do set x 100; set steps 0. And the core syntax will still always work, so even without line call separators you could do set(x 100) set(steps 0)

If you have any questions or are interested in SetTL, feel free to talk to me :)

I tested how long it takes to entirely remove all nodes from a randomly constructed tree, where removing a parent node also removes all child nodes below it, and it seems that on average the removal-count complexity is O(sqrt(n)), where n is the number of initial nodes in the random tree. If you remove the nodes one by one, with no subtree removal on parent removal, then it takes n steps to remove all the nodes

Note: the O(sqrt(n)) complexity depends on what type of random tree is constructed; I built the random trees for this test by adding each new child to a uniformly-selected random node in the tree
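For reference, a minimal Python sketch of the experiment as described (random recursive tree; repeatedly delete a uniformly random surviving node along with its subtree):

import random

def build_random_tree(n):
    # children[i] lists the children of node i; each new node attaches
    # to a uniformly random existing node (a "random recursive tree")
    children = [[] for _ in range(n)]
    for i in range(1, n):
        children[random.randrange(i)].append(i)
    return children

def removals_to_destroy(children):
    n = len(children)
    alive = [True] * n
    removals = 0
    while any(alive):
        v = random.choice([i for i in range(n) if alive[i]])
        removals += 1
        stack = [v]  # remove v and its whole subtree
        while stack:
            u = stack.pop()
            if alive[u]:
                alive[u] = False
                stack.extend(children[u])
    return removals

n = 10_000
trials = [removals_to_destroy(build_random_tree(n)) for _ in range(20)]
print(sum(trials) / len(trials), n ** 0.5)  # average removal count vs sqrt(n)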

Now to try and find out why this is true, theoretically (I imagine it's either really simple or really complex)


long and stupid 

(Written while programming today. Ignore this post!)
Programming today:

Writing a node replacement function replace_node_with_new for my Python shallow tree library

I find I need to be able to remove each child subtree from the to-be-removed node (or else I might end up with an accidental forest)

So I have to make a subtree remover function remove_subtree. But it's probably a better idea to change the existing remove_node function so it has an option to also remove child nodes recursively (thus removing the subtree). I could use recursion, but some subtrees might be really deep, so I’ll have to make a leaf-first node index generator to iterate a subtree so I can remove it without function recursion. Which is tricky, because if the node you yielded is mutated (eg: removed) after the yield, and you try to get properties from that node, then that will fail or result in undefined behavior
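Something like this sketch is what I mean (a hypothetical child-list node shape, not my library’s actual API); materializing the order first also sidesteps the mutate-after-yield problem for the traversal itself:

class Node:
    def __init__(self, children=()):
        self.children = list(children)

def iter_leaf_first(root):
    # Collects the subtree iteratively (no function recursion, so deep
    # subtrees can't overflow the call stack), then yields leaf-first:
    # every node comes out only after all of its descendants
    stack, ordered = [root], []
    while stack:
        node = stack.pop()
        ordered.append(node)  # parents are appended before their descendants
        stack.extend(node.children)
    yield from reversed(ordered)

root = Node([Node([Node()]), Node()])
assert list(iter_leaf_first(root))[-1] is root  # the root comes out last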


Ok, finished with the subtree node generator and test. And with remove_node and replace_node_with_new. And I’m mostly finished with a replace_node that I also wrote, but I ran into a bug, so I’ll shelve that function for now. For reference, this all took 2 hours

Ran into some test failures with remove_node, which I solved by collecting all the traversed child subtree nodes and then just nullifying them in the tree (setting them to None nullifies them in this case), which is much faster (and evidently less error-prone) than calling disconnect_parent, etc on each child. While doing this, I noticed my random tree building-and-destroying remove_node unit test destroys the tree it builds much more quickly when it picks a random node and removes that node and its subtree, rather than just removing the node itself. This makes sense, but now I’m wondering how many random selections and remove_node calls with subtree removal it takes to destroy a tree on average. I might write a script to test this later


And now that I have replace_node_with_new, I can move on to completing greater_than_ef, which is what SetTL’s draft executor calls when it encounters a > function (or it will, when I finish it), and which needed replace_node_with_new to be finishable

Ok, I finished greater_than_ef and it seems to work, but I discovered the executor is now skipping certain parent call nodes in favor of those nodes’ parents, instead of calling the call nodes and continuing to their parents

I found that this problem was because I wasn’t returning True from print_ef, which is an external function that removes itself during its execution, and so also advances the execution head during its execution. External functions like print_ef can advance to the next executable node on their own, and if they don’t advance, the executor will advance after they’re done. But to signal to the executor that an external function has advanced, and that the executor shouldn’t advance automatically, the external function should return True, or something else that isn’t None. Since I wasn’t returning True, the print_ef function was advancing and then the executor was advancing automatically as well, so every time I called print_ef while testing greater_than_ef, it would skip over one node during execution

Anyway, I fixed that, and after fixing it I discovered that I can cheat a little with simple external functions: I made generic_fn_ef, which creates a closure that acts like greater_than_ef, a lesser-than ef, any other n-ary operation, and any other kind of function that just takes arguments and returns a result. So I can even replace print_ef, and probably any other function that doesn’t mess around with the stack or have to return non-value nodes, or whatever.
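Roughly the shape of that trick (a guessed-at sketch; this executor interface is hypothetical, not SetTL’s actual one):

def generic_fn_ef(fn):
    # Wraps a plain Python function into an external-function closure:
    # it evaluates its argument nodes, replaces the call node with the
    # result, and returns None so the executor advances automatically
    def ef(call_node, executor):
        args = [executor.eval_node(a) for a in call_node.args]
        executor.replace_node_with_new(call_node, fn(*args))
        return None
    return ef

greater_than_ef = generic_fn_ef(lambda a, b: a > b)
sum_ef = generic_fn_ef(lambda *xs: sum(xs))  # n-ary operations work too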

This took around 2 hours


Now that I have some condition to test with (5 < 10): onto testing the implementation of if_ef I wrote yesterday – the analog of if, if-else, if-else-if, etc statements

I’m immediately running into the issue that the 3 print statements in the if statement I’m testing are all printing, when only one should print, depending on which condition in the if arguments evaluates to true.

..
After about 2 hours of running into multiple small bugs, if_ef apparently works correctly

A circular definition for sublimate properties: a property that cannot be explained in terms of non-sublimate properties

If we assume the Reality exists explicably, then its existence must be sublimate, since we cannot otherwise explain its existence without either a composition of non-sublimate axioms (which themselves are inexplicably fundamental) or an infinite regress (which, incidentally, potentially involves sublimate processes)

Theoretically, explanations like the Omega Model imply an infinite regress of further higher-order sublimate property categories. Note: the Omega Model (OM) is the model that there is no false thing; that literally everything is true, and exists. This presents some strangenesses, like physicalism and dualism both being true simultaneously, and contradictory and paradoxical things also being unequivocally true; but otherwise the OM not-so-neatly sidesteps the problem of universal specificness (the problem that the Universe in particular, and the Reality in general, is limited to a specific form, rather than being completely unlimited; note: it may not be, even without the OM, and this implies sublimate properties as well)

At a looser, conceptual level, sublimate things are fundamentally (even: beyond fundamentally, sublimately) distinct from non-sublimate things, in that there are no non-sublimate things that can combine to make a sublimate thing. They’re sort of like stepping into a different dimension of information and ontology. No matter what you do, you cannot escape your N dimensions, but there may be some other things that exist outside your dimensions, and they may make your dimensions possible (eg: a ball making your sphere surface possible). Though, all analogies necessarily fail absolutely and completely to explain sublimate things, for obvious reasons

A further note: the whole sublimate / not sublimate distinction sure harkens to the divine / not divine distinction

Anyway, I like me some pizza!
