U.S. citizens do decide on these things.
US-made firearms are registered with serial numbers built into the gun. Licenses and training are required to legally own one. These things did not always exist.
But if we look at the statistics, the number of active shooters per year has stayed at the same low levels all the way back to the 1970s.
Fewer than 10 people on average per year, out of a population of 300+ million, is extremely safe.
https://www.statista.com/statistics/971473/number-k-12-school-shootings-us/
It is also none of other countries' citizens' business.
Yeah. My friend has like 4 strategically placed throughout his house, so he could optimally defend against multiple invaders.
I am pretty fond of The Math Sorcerer. So much optimism.
Pretty nice intro to algebraic number theory
@Hyolobrika@mstdn.io
Ah, here we go. There was a bit of an explanation on his channel about each thing. Actual credit also goes to chessapig.
@GlowingLantern@mastodon.online @stux
Yeah, it is definitely a Windows people problem.
We once had a school computer system that printed from multiple OSes. The Linux and Mac machines were always fine. But as soon as a Windows machine was attached, it would bork the whole system.
Well that is sad. I did not want another shitty harem anime.
It is pretty great being intentionally conceived after my parents got married. Being an unwanted accident sounds like a mindfuck.
Good mathematics art is so underrated.
Credit to the K-theory YouTube channel.
https://www.youtube.com/channel/UCRXC6M65qiFlHaXGDNC--VA
@Hyolobrika@mstdn.io @ColinTheMathmo
Yeah, and it still does draw from philosophy. A lot of things took off in the 1850–1930 range that gave math its own capacity to do more than serve as a foil for physics calculations or fill a limited place as an element of liberal education.
Sadly, due to complexity and the sheer number of actors, cognitive barriers between fields rise up after a few generations, so reinventing the wheel happens. But historical precedent is not ownership, so rethinking these topics in a different context is not too bad, even if it can be highly derivative.
More that, since around 1850, the same people who would have been drawn into philosophy are now also drawn into mathematics.
@paullammers@eldritch.cafe
Decompilation, and compilers in general, are getting a large boost from ML and DL methods. They have their own sections at PL conferences now. In the greater scope, computer scientists have rediscovered how flexible real-number computation is and are filling an intellectual gap.
DL especially is filling that gap. A lot of math research uses what are called modules, which are a generalization of vector spaces. Tensors are an example of modules. There is a lot of potential behind this. An active area of mathematics called representation theory uses modules and vector spaces to represent many other structures. So for problems that are too complex to write by hand, such as optimization scheduling, an approximate solution could be obtained with tensor code.
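For a concrete picture of the generalization, here is a minimal sketch of the left-module axioms; they are just the vector-space axioms with the scalar field swapped for a ring R:

```latex
% A left module M over a ring R: scalars come from a ring, not a field.
% For all r, s in R and x, y in M:
\begin{align*}
  r \cdot (x + y) &= r \cdot x + r \cdot y \\
  (r + s) \cdot x &= r \cdot x + s \cdot x \\
  (r s) \cdot x   &= r \cdot (s \cdot x) \\
  1_R \cdot x     &= x
\end{align*}
% Taking R to be a field recovers a vector space; tensor products of
% modules are where tensors come from.
```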
So I think of DL as another style of programming, one that gives up correctness in exchange for automatic representations, probabilistic solutions, and a way around traditional computational complexity issues.
The intellectual gap will eventually fill, and DL papers will level off in output. People will start to understand general principles and not every new program will be considered paper worthy.
Uh oh. An open-source project is being pretentious again.
@paullammers@eldritch.cafe
No. Correctness of high-level program output is a given, with known hardware and language rules. Quality is another thing.
The benchmark itself is vacuous, but the human readability of the output is a useful idea. Unfortunately, it goes unmeasured except for a single example comparison at the very end. Describing the training technique is a useful contribution, if the training data is credible. But the paper does not have a replication badge of the kind ACM tends to give out, so the contribution there may be complete fluff as well. A lot of neural network studies lack the statistical verification to count as real scientific work, sadly. So I am also suspicious of them fudging data to get good scores.
To summarize: this is a neat idea, but it is full of practical issues and, measured against what already exists, is not that helpful a contribution.
@paullammers@eldritch.cafe
It does not seem like a contradiction, although the paper is poorly done. Stating 0% in the abstract was misleading. The benchmarks appear to be against other neural systems, and at that, decompiler tools I have never heard of, which is suspicious. And the data is particular to similar coding styles, with a lot of potential hidden issues, as deep neural networks tend to have.
But anyway, you can look up what a decompiler is and how one works, for a refresher maybe? Decompilation has been an established security technique for decades. It is just traditional code compilation in reverse, which does not allow for much ambiguity. You can use objdump on Linux for some easy assembler code, or a tool like this to get C code (see the sketch after the link):
https://sourceforge.net/projects/decompiler/
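To make that concrete, here is a minimal sketch, assuming gcc and binutils are available; the file name demo.c is made up. Compiling it and running objdump -d on the binary shows the assembly a decompiler would start from:

```c
/* A minimal sketch, assuming gcc and binutils are installed.
 * Compile and disassemble (the file name demo.c is made up):
 *   gcc -O0 -o demo demo.c
 *   objdump -d demo | less
 * A decompiler's job is to recover something like the C below
 * from that disassembly.
 */
#include <stdio.h>

/* A tiny function: compiles to a handful of mov/add instructions,
 * easy to spot in the objdump output. */
static int add(int a, int b) {
    return a + b;
}

int main(void) {
    printf("%d\n", add(2, 3));
    return 0;
}
```

At -O0 the add function is not inlined, so its instructions are easy to match back to the source.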
What is your background? Are you interested in getting into security or programming language theory? Compiler writing is probably one of the most underrated computer fields, considering how diverse computation hardware is becoming. And large compilers do have many security bugs. There are also attempts by SE researchers to make auto-documenting code. This paper is kind of similar.
Something I used to do before I got a phone with close to a terabyte in storage.
I am pretty curious about how to use automated reasoning systems to help discover new things, use and verify old ideas, and generally make my life easier.
Current events I try to keep up on:
- Math Logic community (The Journal of Symbolic Logic)
- Statistics community (JASA, AoS)
- Algebra community (JoA, JoAG, JoPaAA, SIGSAM)
- Formal Methods community (CAV/TACAS)
Passing the learning curve up to current events:
- Abstract Algebra (Dummit, Foote)
- Commutative Algebra (Eisenbud)
- Algebraic Geometry (Hartshorne)
- Mathematical Logic (Mendelson)
- Model Theory (Marker)