
There is so much research to do.

Including people doing research outside of academia, I would approximate there are about 25 million researchers worldwide. I am not sure what the number was 100 years ago, but I only see it growing.

richardprice.io/post/128555616

Another day... another 10 hours to dump into type theory studies.

For the logic gods.

Huh, that is kind of cool. Removing noise from physical sensors. It is like learning to see at a base level.

arxiv.org/pdf/2106.11936.pdf

Bertrand had it great. He never had to worry about money. He pretty much just did what he wanted all day.

I also have hyperfocus, so for some things I have really good small-scale problem solving.


I noticed something the other day. My intelligence is different.

For the moment I will say I am not just stupid. Scatterbrained thought is a huge handicap, but it lends itself to what I would call large-scale problem solving.

In small-scale problem solving, attention span is really important, and a large working memory means being able to hold more complex concept interactions. I, and probably most other people, get around the limit by "chunking" blocks of thought. But it is very noticeable when someone is more intelligent than other people, because they do not require a bunch of chunking to handle new material. Their ability to take in new ideas is both faster and broader.

In large-scale problem solving, attention span is not as important; there is time to write down complex objects and interactions. What is more helpful is what I will describe as creative ability, and it comes down to two parts. The first is the ability to gather a lot of information and synthesize it, which lends itself to recognizing morphisms: "x is an example of y, so some set of tricks in y can be used in x." The second part is a randomness of thought. A strictly structured thought process gets stuck in local regions of a problem too readily. And a natural evasion of distractions can also lead the generated logical spaces to overfit, because it is assumed, without knowing the actual theory, that more precision is more scientific or intellectual.

Scatterbrained behavior is a trade-off, not a total loss.

Trying not to let on to a colleague that I have ADHD. I was starting to wonder if I count as sentient.

Well, birthday is coming up. Please slow down, time... 😅

I think it is kind of interesting that assembly and Forth are regular-expression-tier languages (aside from beefy modern macro assemblers). They can be handled by scanners alone, so the theoretical top speed of compilation is trivially reachable for them. And that really is good enough to get work done: nobody actually needs to include infinity in a program's search space, and the grammar level can stay really simple. Even something like Java bytecode has machine independence.
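
A toy sketch of what "scanner alone" means, in Python: the entire front end of this Forth-style language is splitting on whitespace, one linear pass, no parse tree. Everything here (the run_forth name, the word set) is invented for illustration, not a real Forth.

```python
def run_forth(source: str) -> list:
    """Evaluate a tiny stack language in one linear pass over its tokens."""
    stack = []
    binops = {"+": lambda a, b: a + b,
              "-": lambda a, b: a - b,
              "*": lambda a, b: a * b}
    for token in source.split():          # the "scanner" is just split()
        if token.lstrip("-").isdigit():   # number literal: push it
            stack.append(int(token))
        elif token == "dup":              # stack word: duplicate the top
            stack.append(stack[-1])
        elif token in binops:             # arithmetic word: pop two, push one
            b, a = stack.pop(), stack.pop()
            stack.append(binops[token](a, b))
        else:
            raise ValueError(f"unknown word: {token}")
    return stack

print(run_forth("2 3 + dup *"))           # -> [25]
```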

But the optimization and correctness steps of a compiler are really desirable, and that is where this dream of simplicity dies. Optimization and correctness checking use a lot of potentially exponential-time algorithms, or may not terminate for every input. They also allow for more intricate grammars, far beyond what a parse tree covers. Languages like C are theoretically much slower here because their grammars lack the expressiveness to state these properties. And that might become even more obvious a few decades from now as more people get into developing optimizations for higher-level language compilers.
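
For contrast with the scanner-only case above, here is a minimal example of the tree-level work an optimizer does: a toy constant-folding pass. It needs an AST and a recursive rewrite; the node names are made up for the sketch.

```python
# Toy constant folding: rewrite Add(Num a, Num b) -> Num(a + b), bottom-up.
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

def fold(node):
    """Fold constant subtrees; anything non-constant is left in place."""
    if isinstance(node, Add):
        left, right = fold(node.left), fold(node.right)
        if isinstance(left, Num) and isinstance(right, Num):
            return Num(left.value + right.value)
        return Add(left, right)
    return node

expr = Add(Num(1), Add(Num(2), Num(3)))   # 1 + (2 + 3)
print(fold(expr))                          # Num(value=6)
```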

Then there are neural-network-based solutions, or solutions that come out of higher math proofs, which have an even higher level of infinity in their search space. Everybody wants safe and fast code, but it is impractical to learn all of it. These kinds of optimizations and correctness additions are coming out of whole bodies of research. And this is also kind of a problem, because a language could become impossible to specify outside of using the compiler as the specification.

There are scanner generators. There are parser generators. But why do we avoid semantics generators? What would be a good language to specify this part of a compiler?
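
As a thought experiment on what a semantics generator's input could look like, here is a hypothetical sketch in Python: typing rules written as plain data, with one generic checker derived from the table, the way a parser generator derives a parser from a grammar. Every name in it is invented.

```python
# Hypothetical "semantics generator" input: each rule maps
# (operator, argument types) to a result type.
TYPING_RULES = {
    ("+",   ("int", "int")): "int",
    ("+",   ("str", "str")): "str",
    ("<",   ("int", "int")): "bool",
    ("not", ("bool",)):      "bool",
}

def check(expr):
    """Infer a type from the rule table; expr is ("op", arg, ...) or a literal."""
    if isinstance(expr, tuple):
        op, *args = expr
        arg_types = tuple(check(a) for a in args)
        try:
            return TYPING_RULES[(op, arg_types)]
        except KeyError:
            raise TypeError(f"no rule for {op} at {arg_types}") from None
    return type(expr).__name__   # literals type themselves: int, str, bool

print(check(("<", ("+", 1, 2), 10)))   # bool
```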

Reasoning About Recursive Tree Traversals
engineering.purdue.edu/~xqiu/p

Commutativity and associativity are some weird properties.

A few mathematicians I know like commutative algebra quite a bit more than the non-commutative stuff. Symmetry is nice because it is simple. But a basic feature of vector spaces, and of grammars, is that they are naturally non-commutative. It is like there is something about non-commutativity that does not scale, I guess.

And associativity is basically the entire parallel programming research field in a nutshell (at least on the PL side; electronics is different).
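
A small Python illustration of that point (not from any particular paper): a tree-shaped reduction, the bracketing a parallel scheduler would pick, agrees with the sequential fold exactly when the operator is associative.

```python
from functools import reduce

def tree_reduce(op, xs):
    """Reduce by splitting in half, the shape a parallel runtime would use."""
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return op(tree_reduce(op, xs[:mid]), tree_reduce(op, xs[mid:]))

xs = [1, 2, 3, 4, 5]
add = lambda a, b: a + b          # associative: any bracketing agrees
sub = lambda a, b: a - b          # non-associative: bracketing matters

print(reduce(add, xs), tree_reduce(add, xs))   # 15 15
print(reduce(sub, xs), tree_reduce(sub, xs))   # -13 -5: answers diverge
```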

Eh, I guess a lot of things are age-old algebra with a different coat of paint.

The most successful area of AI is compiler theory.

I am pretty anti-intelligence for someone who works in artificial intelligence.

Intelligence is not false. It is just that so much of the idea of it is an anthropomorphism.

It is weird how AI has a lot to do with programming language theory. Everything is always blending together in the formal sciences. The actual boundaries between topics are nuanced.

It is kind of like the feeling that math is all one subject, even though to an undergrad something like analysis and algebra might feel miles apart.

A lot of what makes a person seem smart, I think, is their belief that they can overcome intellectual obstacles.

I guess I am publishing this semester after all. lel :blobwizard:

Types are Internal $\infty$-Groupoids. (arXiv:2105.00024v1 [cs.LO]) arxiv.org/abs/2105.00024
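
Since the slogan in that title is compact, here is a minimal Lean sketch of its first level, assuming nothing from the paper itself: the identity proofs of any type already form a groupoid (reflexivity as identity, symmetry as inverse, transitivity as composition), and iterating this on identities between identities is the "∞" part.

```lean
-- Every type carries a groupoid structure on its identity proofs.
variable {A : Type} {x y z : A}

example : x = x := Eq.refl x                          -- identity element
example (p : x = y) : y = x := p.symm                 -- inverse
example (p : x = y) (q : y = z) : x = z := p.trans q  -- composition

-- The groupoid laws hold up to higher identities, e.g. p ⬝ p⁻¹ = refl:
example (p : x = y) : p.trans p.symm = Eq.refl x := by
  cases p; rfl
```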

I leave the house for 5 minutes and I hurt my back. Guess what I am doing for the rest of the day.
