
So I usually speak against just letting companies have your data. This is one of the reasons why. Data might seem harmless, but it is often rich with inferable structure. Not to mention this kind of data does not depreciate.

Original research
arxiv.org/abs/1611.04135

Article overview with less math-speak
technologyreview.com/2016/11/2

So..

- Your face alone could give away that you are an independent thinker, had an atypical childhood, or anything else that might earn you a criminal record in China.

- Law-abiding citizens tend to all look like each other.

The 21st century is weird. It seems like every time I think of a cool research direction, I look for some literature and find a whole team of people doing just that.

Yeah, there really are 7+ billion of us. So, for example, over 140 million people could qualify for Mensa (which admits the top 2%). Not that Mensa is a criterion for intelligence itself. But it kind of demonstrates the point that there are potentially a lot of smart people out there.

Science is being done at a rapid pace. So much work is coming out that people in the same field cannot keep up with the literature and end up accidentally repeating research results.

I think there is plenty of value in not trying to be original in the mess of it. A researcher could instead take the time to be thorough, or to organize and digest what already exists.

The ergonomics of modern proof writing just get better and better.

This is just a proof about a C program. Nothing in the program is important from a philosophical standpoint. But the logic and the implementation are swell.

The logic is called higher-order impredicative concurrent separation logic. The separation part lets me reason about small parts of the C program at a time. Multiple people can work on the same proof without stepping on each other.

The implementation allows for a lot of automation in proof writing. Most of the heavy lifting and boring stuff, like interfacing mathematical integers to 64-bit machine registers, is already done. There is plenty of room to create more automation for common proof strategies.

*mathematical sophistication intensifies*

dl.acm.org/doi/10.1145/3297858

A more secure hardware design with RISC-V

If it does not have an objective function, it's not really intelligence.

But most things have one. So most things are intelligent, much like most things are computers under the finite-state-machine model.

I think the reason some political groups distrust corporations is because such corporations count as a different kind of intelligence.

Some of these reasons are also similar to the distrust of AI.

I kind of wonder about Software 2.0 (AI- and logic-based systems) and education that teaches people to code. I was lucky to learn a lot of math and AI in school. Most programmers may have a hard time finding work a decade from now.

Proofs about the correctness of C code are now much easier. There is a software tool chain using "Verifiable C" to make systems safer.

Here is a tutorial on it, and on writing proof code in general. It is fully machine-checked in Coq.

softwarefoundations.cis.upenn.

Maybe 100 years ago mathematicians could hide in ivory towers and treat the subject like a liberal art. But in 2021 engineers are right at the gate.

Engineers want more algebras for their intricate software, more analysis for their haptic feedback systems, more topology and algebraic geometry for their robots and artificial intelligence systems.

If mathematicians do not publish about it, engineers will publish. Where mathematicians refuse to be rigorous, engineers and logicians will invent languages that automate writing proofs.

It is the end of an era. It is like seeing the Catholic religion split into the multitude of other Christian religions.

You are a cyborg.

Take good care of yourself by keeping your software bug free and up to date.

What even IS intelligence?

Knowing the optimizations in a problem space is just mathematical knowledge. You don't have to be intelligent yourself to exploit pretty much everything that would count as intelligence.

So what could it be in 50 years? Possibly ownership of GPU cores and the latest and greatest optimization software.

Have to say I am pretty fond of systems science, particularly software engineering.

Programs are these formal structures. They do not rust or anything like that. But they still age the way all complex systems age.

One consequence of this is that systems with a hardy design can last pretty much forever. There is a lot of "brown field" code out there, and a great deal of time is spent rebuilding especially important systems out of stronger logic.

But two things keep forcing change, even in bulletproof systems: security problems and new hardware. Interfacing layers can mitigate the upkeep cost of hardware churn, but security issues tend to force rewrites. What eliminates security upkeep is the rare edge case of mathematical purity, and I think most systems can never be that clean.

The Curve25519 implementation is both the most blessed and the most cursed code in the world.

On one hand, it has the cleanest math and attracted the best minds to work on it. The code deployed in most projects has now been fully optimized and formally verified to be mathematically correct and secure. One of the biggest accomplishments in applied cryptography in recent years.

On the other hand, the monstrous, machine-generated assembly or C code will give any unsuspecting programmer a heart attack.

Seriously, "you are not expected to understand this."

git.kernel.org/pub/scm/linux/k

Anybody here interested in mathematical logic?

Qoto Mastodon
