There is so much research to do.
Including people doing research outside of academia, I would estimate there are about 25 million researchers worldwide. I am not sure what it was 100 years ago, but I only see that number growing.
https://www.richardprice.io/post/12855561694/the-number-of-academics-and-graduate-students-in
Huh, that is kind of cool. Removing noise from physical sensors. It is like learning to see at a base level.
I also have hyperfocus, so for some things I have really good small-scale problem solving.
I noticed something the other day. My intelligence is different.
For the moment I will say I am not just stupid. Scatterbrained thought is a huge handicap, but it lends itself to what I would call large-scale problem solving.
In small-scale problem solving, attention span is really important: a large working memory means being able to handle more complex concept interactions. I, and probably most other people, get around this limit by "chunking" blocks of thought. But it is very noticeable when someone is more intelligent than other people, because they do not require a bunch of chunking to handle new material. Their ability to handle new ideas is both faster and broader.
In large-scale problem solving, attention span is not as important. There is time to write down complex objects and interactions. What is more helpful is what I will describe as creative ability. This comes down to two parts. The first is the ability to gather a lot of information and synthesize it. This lends itself to recognizing morphisms: "x is an example of y, so some set of tricks from y can be used on x." The second part is a randomness of thought. A strictly structured thought process gets stuck in local spaces of a problem too readily. And a natural evasion of distractions can also cause the logical spaces one generates to overfit, because it is assumed, without knowing the actual theory, that more precision is more scientific or intellectual.
Scatterbrained behavior is a trade-off, not a total loss.
I think it is kind of interesting that assembly and Forth are regular-expression-tier languages (aside from beefy modern macro assemblers). They can be handled by scanners alone, so the theoretical top speed of compilation is trivial for them. And it really is good enough to get work done. Nobody actually needs to include infinity in a program's search space; the grammar level can be really simple. Even something like Java bytecode has machine independence.
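To make that concrete, here is a minimal sketch of a scanner-only language in Python. The four words are a made-up toy dialect, not real Forth, but the structure is the point: splitting on whitespace is the entire front end, and every token is dispatched immediately with no parse tree in sight.

```python
def run(source: str) -> list:
    stack = []
    words = {
        "+":   lambda: stack.append(stack.pop() + stack.pop()),
        "*":   lambda: stack.append(stack.pop() * stack.pop()),
        "dup": lambda: stack.append(stack[-1]),
        ".":   lambda: print(stack.pop()),
    }
    for token in source.split():      # the whole "front end" is one split
        if token in words:
            words[token]()            # dictionary lookup and dispatch, no grammar
        else:
            stack.append(int(token))  # everything else is a number literal
    return stack

run("2 3 + dup * .")  # prints 25
```

One pass, no backtracking, so the whole thing runs at token-stream speed.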
But the optimization and correctness steps of a compiler are really desirable. And that is where this dream of simplicity dies. Optimization and correctness checking use a lot of potentially exponential-time algorithms, or may not terminate for every problem. They also allow for more intricate grammars, far beyond what a parse tree covers. Languages like C are theoretically much slower because they lack the expressiveness of these richer grammars. And that might become even more obvious a few decades from now as more people get into developing optimizations for higher-level language compilers.
Then there are neural-network-based solutions, or solutions that come out of higher math proofs, which have an even higher level of infinity in their search space. Everybody wants safe and fast code, but it is impractical to learn all of it. These kinds of optimizations and correctness additions are coming out of whole bodies of research. And this is also kind of a problem, because a language could become impossible to specify except by using the compiler itself as the specification.
There are scanner generators. There are parser generators. But why do we avoid semantics generators? What would be a good language to specify this part of a compiler?
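I do not know of a real answer, but here is a speculative sketch of what the input to a semantics generator could look like: denotational-style equations written as a plain table, with a generic evaluator derived from the table instead of hand-written per language. Every node kind and field name here is invented for illustration.

```python
# meaning functions: node kind -> (node, environment, evaluator) -> value
SEMANTICS = {
    "lit": lambda node, env, ev: node["value"],
    "var": lambda node, env, ev: env[node["name"]],
    "add": lambda node, env, ev: ev(node["lhs"], env) + ev(node["rhs"], env),
    "let": lambda node, env, ev: ev(node["body"],
                                    {**env, node["name"]: ev(node["rhs"], env)}),
}

def evaluate(node, env):
    # the "generated" part: dispatch comes from the table, not hand-written code
    return SEMANTICS[node["kind"]](node, env, evaluate)

# let x = 2 in x + 3
ast = {"kind": "let", "name": "x",
       "rhs": {"kind": "lit", "value": 2},
       "body": {"kind": "add",
                "lhs": {"kind": "var", "name": "x"},
                "rhs": {"kind": "lit", "value": 3}}}
print(evaluate(ast, {}))  # 5
```

Attribute grammars and tools like Ott or the K framework already live somewhere in this space, but nothing there feels as ubiquitous as lex and yacc became.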
Reasoning About Recursive Tree Traversals
https://engineering.purdue.edu/~xqiu/ppopp2021_authorversion.pdf
Commutativity and associativity are some weird properties.
A few mathematicians I know like commutative algebra quite a bit more than the non-commutative stuff. Symmetry is nice because it is simple. But the basic operations on vector spaces and grammars, composing linear maps and concatenating strings, are naturally non-commutative. It is like there is something about non-commutativity that does not scale, I guess.
And associativity is basically the entire parallel programming research field in a nutshell (at least on the PL side; electronics is different).
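The textbook version of that claim, as a quick sketch: an associative operation is exactly the license a scheduler needs to reduce chunks independently and combine the partial results in any grouping. Nothing below comes from the linked paper; it is just the standard observation.

```python
from functools import reduce
from multiprocessing.dummy import Pool  # thread pool, keeps the demo portable

def parallel_reduce(op, xs, chunks=4):
    step = max(1, len(xs) // chunks)
    parts = [xs[i:i + step] for i in range(0, len(xs), step)]
    with Pool(chunks) as pool:
        partials = pool.map(lambda part: reduce(op, part), parts)
    return reduce(op, partials)  # regrouping is valid only because op is associative

xs = list(range(1, 1001))
assert parallel_reduce(lambda a, b: a + b, xs) == sum(xs)  # associative: safe
# subtraction is not associative, so chunking changes the answer:
print(parallel_reduce(lambda a, b: a - b, xs), "!=", reduce(lambda a, b: a - b, xs))
```

Drop associativity and you are stuck with the sequential left-to-right order; that is the whole game.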
Eh, I guess a lot of things are age-old algebra with a different coat of paint.
It is weird how AI has a lot to do with programming language theory. Everything is always blending together in the formal sciences. The actual boundaries between topics are nuanced.
It is kind of like the feeling that math is all one subject, even though to an undergrad something like analysis and algebra might feel miles apart.
Types are Internal $\infty$-Groupoids. (arXiv:2105.00024v1 [cs.LO]) http://arxiv.org/abs/2105.00024
I am pretty curious about how to use automated reasoning systems to help discover new things, use and verify old ideas, and generally make my life easier.
Current events I try to keep up on
- Math Logic community (The Journal of Symbolic Logic)
- Statistics community (JASA, AoS)
- Algebra community (JoA, JoAG, JoPaAA, SIGSAM)
- Formal Methods community (CAV/TACAS)
Passing the learning curve up to current events
- Abstract Algebra (Dummit, Foote)
- Commutative Algebra (Eisenbud)
- Algebraic Geometry (Hartshorne)
- Mathematical Logic (Mendelson)
- Model Theory (Marker)