@sparkinstech sounds like fun. 😊
@AmpBenzScientist @freemo @Science
I feel successful. What are you trying to imply? Lol
@freemo @AmpBenzScientist @Science
Lol yep, not all math. The math many physicists use is both old and ugly.
@freemo could you share the script?
@duponin And then donate to the authors, if you're financially able.
@niconiconi You can try guix on top of openbsd. Secure core, low fuss dependency system.
I also have hyperfocus. So for some things I have really good small-scale problem solving.
@Mote I have medicine for it. That happened to be when I was supposed to take another dose.
I am pretty concerned about my intellectual ability, and my career, if the drug ever stops being available. I can't just ape an attention span.
@freemo Admittedly, it is a bit hard for me to function if I have been lost in my work for too long.
It came across as stupid. We were working on a paper together, and I had to put two sections of text right next to each other just so I could remember one section long enough to adapt it over. At the same time they were giving me lists of tasks and random questions. Probably not a reasonable situation late at night without coffee, but it felt odd because they were handling that much bandwidth just fine. I was not.
I noticed something the other day. My intelligence is different.
For the moment I will say I am not just stupid. Scatterbrained thought is a huge handicap, but it lends itself to what I would call large-scale problem solving.
In small-scale problem solving, attention span is really important, and a large working memory means more complex concept interactions can be held at once. I, and probably most other people, get around this by "chunking" blocks of thought. But it is very noticeable when someone is possibly more intelligent than other people, because they do not require a bunch of chunking to handle new material. So their ability to handle new ideas is both faster and broader.
In large-scale problem solving, attention span is not as important; there is time to write down complex objects and interactions. What is more helpful is what I will describe as creative ability, and that comes down to two parts. The first is the ability to gather a lot of information and synthesize it. This lends itself to recognizing morphisms: "x is an example of y, so some set of tricks from y can be used in x." The second part is a randomness of thought. A strictly structured thought process gets stuck in local regions of a problem too readily. And a natural evasion of distractions can also lead the logical spaces one generates to overfit, because it is assumed, without knowing the actual theory, that more precision is more scientific or intellectual.
Scatterbrained behavior is a trade-off, not a total loss.
@bonifartius For the last part: I think as we exhaust the easier problems, the sophistication of neural net research will grow. Explanations of these things can get pretty fancy already.
@bonifartius On the second part, there is research being done on making neural nets safer to work with. Probability-based logics exist as well. So while I am not certain the research will pay off, I am betting it will be pretty successful.
https://trac.csail.mit.edu/
https://www.cs.purdue.edu/homes/roopsha/purform/projects.html
I think it is kind of interesting that assembly and Forth are regular-expression-tier languages (aside from beefy modern macro assemblers). They can be done by scanners alone. The theoretical top speed of compilation is trivial to reach for these. It really is good enough to get work done. Nobody actually needs to include infinity in the program's search space. The grammar level can be really simple. Even something like Java bytecode has machine independence.
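To make the "scanners alone" point concrete, here is a minimal sketch of a Forth-style evaluator in Python. The word set is made up for illustration and this is nowhere near a real Forth, but it shows that a flat token stream plus a dispatch table is enough; no parse tree is ever built.

```python
# Toy Forth-style evaluator: tokens are split on whitespace and executed
# one at a time against a stack. The word set here is invented for the
# example, not taken from any real Forth.

def run(source: str) -> list:
    stack = []
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "drop": lambda: stack.pop(),
    }
    for token in source.split():          # the whole "front end" is one split()
        if token in words:
            words[token]()
        else:
            stack.append(int(token))      # anything else is a numeric literal
    return stack

print(run("2 3 + dup *"))  # -> [25]
```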
But the optimization and correctness steps of a compiler are really desirable, and that is where this dream of simplicity dies. Optimization and correctness use a lot of potentially exponential-time algorithms, or may not terminate for every problem. They also allow for more intricate grammars, far beyond what a parse tree covers. Languages like C are theoretically much slower because they lack the expressiveness of these grammars. And that might become even more obvious a few decades from now as more people get into developing optimizations for higher-level language compilers.
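As a toy illustration of where the exponential cost shows up, here is a brute-force "superoptimizer" sketch over the same kind of stack language, again with made-up names and a tiny word set. It enumerates every candidate program up to a length bound and keeps one that agrees with the original on some test inputs; the candidate space grows as |words|^n, which is why real tools lean on SMT solvers and aggressive pruning instead.

```python
# Brute-force search for a shorter, equivalent stack program.
from itertools import product

WORDS = ["+", "*", "dup", "drop", "x", "2", "4"]

def run(tokens, x):
    stack = []
    for t in tokens:
        if t == "+":      stack.append(stack.pop() + stack.pop())
        elif t == "*":    stack.append(stack.pop() * stack.pop())
        elif t == "dup":  stack.append(stack[-1])
        elif t == "drop": stack.pop()
        elif t == "x":    stack.append(x)       # the program's single input
        else:             stack.append(int(t))  # numeric literal
    return stack

def shortest_equivalent(original, max_len=3, tests=range(-4, 5)):
    want = [run(original, x) for x in tests]
    for n in range(1, max_len + 1):
        for candidate in product(WORDS, repeat=n):   # |WORDS|**n candidates
            try:
                if [run(candidate, x) for x in tests] == want:
                    return list(candidate)
            except IndexError:                       # stack underflow: invalid program
                pass
    return original

# "x 2 * x 2 * +" computes 4*x; the search finds e.g. ["x", "4", "*"]
print(shortest_equivalent(["x", "2", "*", "x", "2", "*", "+"]))
```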
Then there are neural-network-based solutions, or solutions that come out of higher math proofs, which have an even higher level of infinity in their search space. Everybody wants safe and fast code, but it is impractical to learn all of it. These kinds of optimizations and correctness additions are coming out of bodies of research. And that is also kind of a problem, because languages could become impossible to specify outside of using the compiler as the specification.
I am pretty curious about how to use automated reasoning systems to help discover new things, use and verify old ideas, and generally make my life easier.
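As a tiny, concrete example of the "verify old ideas" part, here is a sketch using the Z3 SMT solver from Python (this assumes the z3-solver package; the property and names are my own choices). It checks the classic branchless absolute-value bit trick against the obvious definition over 32-bit two's-complement integers.

```python
# Ask Z3 for a counterexample to: (x ^ (x >> 31)) - (x >> 31) == |x|
# over 32-bit two's-complement integers. If none exists, the trick holds.
from z3 import BitVec, If, Solver, unsat

x = BitVec("x", 32)
mask = x >> 31                        # arithmetic shift: all 1s if x < 0, else 0
branchless_abs = (x ^ mask) - mask    # the old trick being verified
reference_abs = If(x < 0, -x, x)      # the obvious definition

s = Solver()
s.add(branchless_abs != reference_abs)    # search for a counterexample
print("verified" if s.check() == unsat else s.model())
```

If the solver reports unsat for the negated claim, there is no 32-bit counterexample, so the trick is verified at that width.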
Current events I try to keep up on
- Math Logic community (The Journal of Symbolic Logic)
- Statistics community (JASML, AoS)
- Algebra community (JoA, JoAG, JoPaAA, SIGSAM)
- Formal Methods community (CAV/TACAS)
Passing the learning curve up to current events
- Abstract Algebra (Dummit, Foote)
- Commutative Algebra (Eisenbud)
- Algebraic Geometry (Hartshorne)
- Mathematical Logic (Mendelson)
- Model Theory (Marker)