
@exsangus @stux

And far more people die from accidents in general than from attacks. The number of accidental deaths from driving is far higher than from guns, and accidental gun deaths relative to gun ownership are also extremely low.

One-off statistics without context tend to be vacuous.

@stux

U.S. citizens do decide on these things.

US-made firearms are registered with serial numbers built into the gun, and licenses and training are required to legally own one. These things did not always exist.

@getindor

But if we look at the statistics, active shooter incidents have held at the same low numbers per year all the way back to the 1970s.

Fewer than 10 people on average per year, out of 300+ million, is extremely safe.

statista.com/statistics/971473

@stux

It is also none of other countries' citizens' business.

@stux

Yeah. My friend has like 4 strategically placed throughout his house, so he can optimally defend against multiple invaders.

@Hyolobrika@mstdn.io

No problem. It's just the YouTuber's favorite random concepts.

@Hyolobrika@mstdn.io

Ah, here we go. There was a bit of an explanation on his channel about each thing. Actual credit also goes to chessapig.

youtu.be/8gKqE_WogIs

chessapig.github.io/gallery/

@GlowingLantern@mastodon.online @stux

Yeah, it is definitely a Windows people problem.

We once had a school computer system that printed from multiple OSes. The Linux and Mac machines were always fine, but as soon as a Windows machine was attached, it would bork the whole system.

@WashedOutGundamPilot

Well that is sad. I did not want another shitty harem anime.

@BanditHeeler

It is pretty great being intentionally conceived after my parents got married. Being an unwanted accident sounds like a mindfuck.

@Hyolobrika@mstdn.io @ColinTheMathmo

Yeah, and it still does from philosophy. A lot of things took off in the 1850-1930 range that gave math the capacity to do more than serve as a foil for physics calculations or fill its limited place as an element of a liberal education.

Sadly, due to complexity and the sheer number of actors, cognitive barriers between fields rise after a few generations, so reinventing the wheel happens. But historical precedent is not ownership, so rethinking these topics in a different context is not too bad, even though it can be highly derivative.

@ColinTheMathmo

More that, since around 1850, the same people who would have been drawn into philosophy are now also drawn into mathematics.

@paullammers@eldritch.cafe

Decompilation, and compilers in general, are getting a large boost from ML and DL methods; they have their own sections at PL conferences now. In the bigger picture, computer scientists have rediscovered how flexible real-number computation is and are filling an intellectual gap.

DL especially is filling that gap. A lot of math research uses what are called modules, which are a generalization of vector spaces; tensors are an example of modules. There is a lot of potential behind this: an active area of mathematics called representation theory uses modules and vector spaces to represent many other structures. So for problems that are too complex to write by hand, such as optimization scheduling, an approximate solution could be obtained with tensor code (see the sketch at the end of this post).

So I think of DL as another style of programming, one that gives up correctness in exchange for automatic representations, probabilistic solutions, and a way around traditional computational complexity issues.

The intellectual gap will eventually fill, and the output of DL papers will level off. People will start to understand the general principles, and not every new program will be considered paper-worthy.
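
Here is a minimal sketch of what I mean by tensor code, on a toy assignment-style scheduling problem (everything here is hypothetical, just illustrative): relax the hard permutation to a soft matrix, descend the cost by gradient, then round.

import numpy as np

# Toy scheduling: assign n tasks to n slots, minimizing total cost.
# Relax the permutation to a row-stochastic matrix P = softmax(L),
# descend the penalized cost, then round with argmax at the end.
rng = np.random.default_rng(0)
n = 5
C = rng.random((n, n))        # C[i, j] = cost of task i in slot j
L = np.zeros((n, n))          # unconstrained logits we optimize
lam, lr = 5.0, 0.5            # penalty weight and step size

for _ in range(500):
    E = np.exp(L - L.max(axis=1, keepdims=True))
    P = E / E.sum(axis=1, keepdims=True)        # row-wise softmax
    # gradient of the cost plus a penalty pushing column sums toward 1
    g = C + lam * 2.0 * (P.sum(axis=0) - 1.0)   # broadcasts over rows
    # softmax Jacobian-vector product, applied row by row
    L -= lr * P * (g - (g * P).sum(axis=1, keepdims=True))

slots = P.argmax(axis=1)      # round the soft assignment to a hard one
print("slots:", slots, "cost:", C[np.arange(n), slots].sum())

The rounding is not guaranteed to produce a valid schedule, which is exactly the correctness trade-off I mean: a cheap approximate answer instead of an exact one.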

@mray

Uh oh. An open-software project is being pretentious again.

@paullammers@eldritch.cafe

No. Correctness of high-level program output is a given, with known hardware and language rules. Quality is another matter.

The benchmark itself is vacuous, but human readability of the output is a useful idea; unfortunately it goes unmeasured except for an example comparison at the very end. Describing the training technique is a useful contribution if the training data is credible, but the paper lacks the replication badge that the ACM tends to give out, so the contribution there may be complete fluff as well. A lot of neural network studies sadly lack the statistical verification to count as real scientific work, so I am also suspicious of data being fudged to give good scores.

To summarize: this is a neat idea, but it is full of practical issues, and measured against what already exists, it is not that helpful a contribution.

@paullammers@eldritch.cafe

It does not seem like a contradiction, although the paper is poorly done. Stating 0% in the abstract was misleading. The benchmarks look like they are against other neural systems, or at least decompiler tools I have never heard of, which is suspicious. And the data is particular to similar coding styles, with a lot of potential hidden issues, as deep neural networks tend to have.

But anyway, you can look up what a decompiler is and how one works, for a refresher maybe? Decompilation has been a standard security technique for decades. It is just traditional code compilation in reverse, which does not allow for much ambiguity. You can use objdump on Linux for some easy assembler code, or a tool like this to get C code.
sourceforge.net/projects/decom
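
If you want a quick look yourself, here is a hypothetical two-minute version in Python (the binary name "./demo" is just an assumption; point it at anything compiled locally):

import subprocess
import sys

# Disassemble a compiled binary with objdump -d, which dumps the
# executable sections as assembler; any local ELF binary works.
binary = sys.argv[1] if len(sys.argv) > 1 else "./demo"
result = subprocess.run(["objdump", "-d", binary],
                        capture_output=True, text=True, check=True)
print(result.stdout[:2000])   # peek at the start of the disassembly

A decompiler like the one linked above then takes that same machine code and lifts it back toward C.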

What is your background? Are you interested in getting into security or programming language theory? Compiler writing is probably one of the most underrated fields in computing, considering how diverse computing hardware is becoming, and large compilers do have many security bugs. There are also attempts by SE researchers to make auto-documenting code; this paper is kind of similar.

@stux

Something I used to do before I got a phone with close to a terabyte of storage.
