Even better:
> _“You can put a #number on anything if you try hard enough (number quality not guaranteed, see store for details).
> Once you put a number on something, you improve your understanding and decision making (even if the number isn’t of prime quality). At the core of this belief is the idea that the world we live in is made of #math, however literally you decide to take that statement. Whenever a field of #science achieves any useful knowledge of that world, it is usually in form of precise mathematical equations or careful #statistics. Every science is an exact science, or trying to be.”_
@tripu "The particular brand of stupidity on display also points to another signal vanity of our time: the conviction that if you measure things enough, you can control them." -JH Kunstler http://bit.ly/1B1VhBx
I completely agree. Measuring isn't the same as shaping or controlling. I don't have the time to read that post, so I'm not sure what the connection to my quote is.
@tripu "The causal relationships between factors in nature are just too entangled for man to unravel through research & analysis. Perhaps science succeeds in advancing one slow step at a time...because it does so while groping in total darkness along a road without end, it is unable to know the real truth of things. This is why scientists are pleased with partial explications and see nothing wrong with pointing a finger and proclaiming this to be the cause and that the effect." -Masanobu Fukuoka
I agree that quantifying and modelling more often _may_ lead to confidence bias, overconfidence, Dunning-Kruger, or problems of the sort.
But that is very weak criticism of my proposal, since _every_ system or tool you use (or lack thereof) could potentially give you a false sense of confidence. Don't people with religious convictions have overconfidence? Don't people who rely mostly on tradition, social norms or intuition have biases (eg, desirability bias) and a poor understanding of issues?
The question for me (and other rationalists) is **whether people and institutions would be better off, in general, using maths more often**, in the form of stats, estimates, cost-benefit analyses, decision matrices, etc.
My impression is that the vast majority of people would benefit from putting numbers on things more often.
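For instance, even a toy weighted decision matrix (the options, criteria, weights and scores below are entirely made up for illustration) forces you to make trade-offs explicit:

```python
# Toy weighted decision matrix: invented criteria, weights, and scores.
# Each option is scored 1-10 per criterion; weights sum to 1.
options = {
    "rent flat A": {"cost": 6, "commute": 9, "space": 4},
    "rent flat B": {"cost": 8, "commute": 5, "space": 7},
    "stay put":    {"cost": 9, "commute": 3, "space": 5},
}
weights = {"cost": 0.5, "commute": 0.3, "space": 0.2}

def weighted_score(scores):
    """Weighted sum of one option's scores."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank options from best to worst weighted score.
for name, scores in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The output is only as good as the scores you put in, but the exercise itself surfaces assumptions and trade-offs that would otherwise stay implicit.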
@tripu you're gonna attempt to replicate Nature in a "lab"? The maths tells us "coupled systems cannot be magically decoupled..."
"Scientific farming has isolated the factors responsible for yield & found ways to improve each of these. But although science can break nature down & analyze it, it cannot reassemble the parts into the same whole. What may appear to be nature reconstructed is just an imperfect imitation..." -Masanobu Fukuoka
@tripu "the absence of religion... replaced by all kinds of crazy beliefs... you realize there's no religious fundamentalism that's more irrational than an atheist's primitive use of probability" -NN Taleb http://bit.ly/2Hi4pNK
@tripu Kary Mullis, 1993 Nobel Prize in Chemistry for inventing PCR testing... science is not a belief system: "Most of the people doing science should not be there. In fact, children should not be encouraged to go into science... avoid it unless they just can't stand not being a scientist. It's a hard job (not suited for people with beliefs)" https://bit.ly/33HJJL6
> _“There's no religious fundamentalism that's more irrational than an atheist's primitive use of probability.”_
This is hyperbolic, unfounded, and obviously false.
So: you take a random atheist with a basic knowledge of probability (myself, for instance) and the way they (I) use probability is _less rational_ than the worst religious fundamentalism imaginable (eg the Aghori, Aum Shinrikyo, Children of God, or Jihadism).
I honestly don't know how to argue against — or in favour of — this ridiculous idea, obviously wrong as it is.
@tripu when are your Gaussian assumptions, i.e. "basic knowledge of probability," appropriate? Why is your basic knowledge of probability better than historical cultural & religious practices, and how is that less extreme than the extremism you cite?
Religion & culture probably keep most people out of trouble most of the time... "better to do what you cannot explain than to explain what you cannot do..."
Our claim is that probability (math, numbers) is appropriate (useful, valuable) in almost all circumstances.
You just surreptitiously swapped “[irrational] religious fundamentalism” for “historical cultural & religious practices”. There are huge differences.
Math is better at describing reality with accuracy, communicating unambiguous information, and predicting the future than less rigorous systems — and _everything_ is less rigorous than math.
Math (science, in general) can (and does) incorporate insights from other areas of life. You can quantify or estimate almost anything, crunch the data, find patterns and correlations, and model aspects of reality. There are studies modelling religious experience, subjective well-being, inflection points in History, whatever. The opposite is not true: Islam can't evaluate a new method to improve the reproduction rates of fish in captivity, Danish culture can't provide an estimate of the future impact of a certain monetary policy, Western tradition can't tell you how to reduce heat dissipation in electric batteries, etc.
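A trivial sketch of that “quantify, crunch the data, find correlations” step, with fabricated data (nothing here is a real study; the numbers exist only to show the mechanics):

```python
import random
import statistics

# Invented example: put a number on a fuzzy thing (self-reported well-being, 0-10)
# and look for a pattern against another measured variable (hours of sleep).
random.seed(0)
sleep = [random.uniform(4, 9) for _ in range(200)]
wellbeing = [min(10, max(0, h + random.gauss(0, 1.5))) for h in sleep]

# Pearson correlation (statistics.correlation requires Python 3.10+).
r = statistics.correlation(sleep, wellbeing)
print(f"correlation between sleep and well-being (fake data): {r:.2f}")
```

Swap in real measurements and the same few lines give you an actual, checkable number instead of a hunch.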
I never denied the usefulness of system 1 thinking, social norms, or heuristics. Of course all that has value. I'm saying that putting numbers on things and manipulating those numbers with the tools of mathematics almost always _adds_ value and helps in understanding and decision-making.
@tripu lol, you're the one strawmanning with religious fundamentalism w/o ack'ing the atheist's own fundamentalist tendencies w/ probability in the post I replied to: "you take a random atheist with a basic knowledge of probability and the way they use probability is less rational than the worst religious fundamentalism imaginable."
"Survival comes first, truth, understanding, and science later" https://scribe.rip/incerto/how-to-be-rational-about-rationality-432e96dd4d1a
@tripu "The causal relationships between factors in nature are just too entangled for man to unravel through research and analysis. Perhaps science succeeds in advancing one slow step at a time, but because it does so while groping in total darkness along a road without end, it is unable to know the real truth of things. This is why scientists are pleased with partial explications & see nothing wrong with pointing a finger & proclaiming this to be the cause & that the effect." -Masanobu Fukuoka
@tripu Kary Mullis, 1993 Nobel Prize in Chemistry for inventing PCR testing... science is not a belief system: "Most of the people doing science should not be there. In fact, children should not be encouraged to go into science... avoid it unless they just can't stand not being a scientist. It's a hard job (not suited for people with beliefs)" https://bit.ly/33HJJL6
I replied to that very quote [here](https://qoto.org/@tripu/108601123340223313) and [here](https://qoto.org/@tripu/108605903537015769). I have the impression you keep on throwing quotes instead of engaging with arguments.
I think I have presented a nuanced view, an opinion that is far from any extreme (_“quantifying and modelling more often **may lead to confidence bias, overconfidence, Dunning-Kruger, or problems of the sort**”_; _“probability […] is appropriate […] in **almost all** circumstances”_; _“I never denied the usefulness of system 1 thinking, social norms, or heuristics; of course **all that has value**”_). I still don't understand where you disagree, specifically.
@js290
A meta idea to further illustrate my point:
I made a bold claim there (that people would understand things better and make better decisions using numbers more often). You seem to disagree. How could we resolve that question?
One way would be to use data and maths. For example, we could design lab experiments and surveys to assess whether participants understand something better, or choose better alternatives, with and without numbers. We could prime participants to rely on different systems (guts, tradition, peers, stats) and see how they perform. We could test them for the same thing in slightly different scenarios. We could look for natural experiments where certain institutions or individuals made decisions under comparable circumstances, except for the availability or absence of mathematical models or estimations. And so on. We would then collate results, control for confounding variables, average and weigh, and arrive at an (always imperfect and always temporary) conclusion, settling the question for the time being.
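To make that first method concrete, here is a minimal sketch of how the results of one such with/without-numbers experiment might be analysed. Everything in it is hypothetical: the scores, the group sizes, and the effect are invented purely to show the mechanics.

```python
import random
import statistics

# Hypothetical decision-quality scores (0-100) from a made-up experiment:
# one group primed to use explicit numbers/estimates, one to rely on intuition.
with_numbers = [72, 65, 80, 58, 77, 69, 74, 81, 63, 70]
without_numbers = [61, 55, 73, 49, 66, 59, 64, 70, 52, 60]

observed = statistics.mean(with_numbers) - statistics.mean(without_numbers)

# Simple permutation test: how often does randomly relabelling participants
# produce a difference at least as large as the observed one?
random.seed(1)
pooled = with_numbers + without_numbers
n = len(with_numbers)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed:
        count += 1

print(f"observed difference: {observed:.1f} points; p ≈ {count / trials:.3f}")
```

The specific test doesn't matter; the point is that a disagreement framed this way can, at least in principle, be settled by evidence.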
Another way to test my bold claim would be… _anything else_: personal experience, anecdotes, opinions.
If you agree with me that the first method would be more useful or reliable than the second one, you kind of agree with my initial claim already.