These are public posts tagged with #epistemology. You can interact with them if you have an account anywhere in the fediverse.
"This paper advances the critical analysis of machine learning by placing it in direct relation with actuarial science as a way to further draw out their shared epistemic politics. The social studies of machine learning—along with work focused on other broad forms of algorithmic assessment, prediction, and scoring—tends to emphasize features of these systems that are decidedly actuarial in nature, and even deeply actuarial in origin. Yet, those technologies are almost never framed as actuarial and then fleshed out in that context or with that connection. Through discussions of the production of ground truth and politics of risk governance, I zero in on the bedrock relations of power-value-knowledge that are fundamental to, and constructed by, these technosciences and their regimes of authority and veracity in society. Analyzing both machine learning and actuarial science in the same frame gives us a unique vantage for understanding and grounding these technologies of governance. I conclude this theoretical analysis by arguing that contrary to their careful public performances of mechanical objectivity these technosciences are postmodern in their practices and politics."
https://journals.sagepub.com/doi/10.1177/01622439251331138
#DataScience #STS #Insurance #Postmodernism #ML #MachineLearning #Risk #RiskGovernance #GroundTruth #Epistemology
Does awareness of intuition's fallibility help people avoid faulty intuitions?
#Teaching students #DualProcessTheory didn't help them avoid faulty intuitions about #physics problems.
https://doi.org/10.1103/PhysRevPhysEducRes.21.010135
#edu #cogSci #bias #debiasing #psychology #epistemology #rationality
Why Everything in the Universe Turns More Complex | Quanta Magazine
https://www.quantamagazine.org/why-everything-in-the-universe-turns-more-complex-20250402
https://news.ycombinator.com/item?id=43677232
#physics #thermodynamics #evolution #entropy #InformationTheory #philosophy #InformationScience #epistemology
New Paper Out!
Probabilistic Empiricism, by M. Suarez and myself, examines how probabilistic models representing objective propensities can be confirmed by induction on situations.
Now open access at EJPS
https://link.springer.com/article/10.1007/s13194-025-00653-5
The article is based on a fruitful combination of our past work: Mauricio's Complex Nexus of Chance on the complicated relations between frequencies, probabilities and dispositions, and my Modal Empiricism on the epistemology of modalities.
Jon Baron shared Peter Wakker's annotated bibliography of #decisionTheory
> 9000 entries!
DocX http://personal.eur.nl/wakker/refs/webrfrncs.docx
PDF http://personal.eur.nl/wakker/refs/webrfrncs.pdf
BibTeX (no annotations, I merged redundancies) https://www.dropbox.com/scl/fi/vbkki82h62ydq0fol1g8d/Decision-Theory.bib?rlkey=84m8zx3tyaa4uy6p0zptkybnx&st=qx9myebb&dl=0
"the level of confidence we in fact adopt must be determined by something that is irrelevant to the reliability of the testimony on which it’s based. After all, everything that is relevant to reliability is already included in the evidence."
#PhilosophyOfScience #PhilSci #Philosophy #Epistemology #Evidence
Out of curiosity, I went to see when in fact "heed" dropped off in usage, and as I suspected, it was during the Enlightenment, just before the Revolutionary War. It tried to pull up again in the mid-19th century, then the 20th century put the nail in it. Again, probably because of its connections to the concept of obedience. To understand someone meant you would obey them, and after a while that didn't seem so fun. (I just picture some angry old father screaming at his children to heed him or else.) Our society is FAR less authoritarian than it once was.
(I saw a YouTube video on outsider artist Henry Darger last night, and jesus we have it good. I'd like to keep it that way and make things even better.)
English conflates the concepts of "hear" and "understand." Many conflicts get nowhere because we use the common phrasing, "You're not listening to me!" or "You didn't hear me!" when what we really mean is, "You didn't get me, I want you to make sense of what I'm saying."
The process of comprehending what someone has said is different from hearing their words. How many times have you said, "You're not listening!" when they were in fact "listening" but not getting it? How many times have you said, "No, I HEARD you!" when you did not, in fact, understand?
English used to have a snappy word for this: heed. To heed was both to hear AND to understand. It also meant "obey," which might be why it fell out of favor (which itself reflects an interesting shift in cultural values). We DO in fact conflate "listening" with obedience sometimes, especially towards children. But not as much as we once did.
The fact that all words mean multiple things, and that English has some issues with which things it conflates, can really influence how we think and interact. It's worth trying to unpack that. Then I start thinking about how we could change English for the better.
Hello, I'm a Social Worker from Mexico City. I'm interested in studying my discipline from a #Science and Technology Studies perspective.
Some other interests are: #philosophy of social sciences, #epistemology, scientific #practices and #community social work.
I want to connect with other people in the social sciences and in the #humanities to broaden my network.
This is my dialnet webpage (in Spanish): https://dialnet.unirioja.es/servlet/autor?codigo=5344286
See you all around!!
What is the characteristic wrong of testimonial injustice?
Richard Pettigrew, 2025
"When someone is in a position of power over you, you often need them to have accurate beliefs that only your testimony can reliably supply"
https://academic.oup.com/pq/advance-article/doi/10.1093/pq/pqaf034/8104779
If you understand Virtue Epistemology (VE), you cannot accept any LLM output as "information".
VE is an attempt to correct the various omniscience problems inherent in classical epistemologies, which all to some extent require a person to know what the Truth is in order to evaluate whether some statement is true.
VE prescribes that we should look to how the information was obtained, particularly in two ways:
1) Was the information obtained using a well-understood method that is known to produce good results?
2) Does the method appear to have been applied correctly in this particular case?
LLM output always fails on point 1. An LLM will not look for the truth. It will just look for a probable combination of words. This means that an LLM is just as likely to combine a number of true statements in a way that is probable but false as it is to combine them in a way that is probable and true.
An LLM only samples the probability of word combinations. It doesn't understand the input, and it doesn't understand its own output.
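A minimal toy sketch of that "probable combination of words" point, with an invented hand-written distribution standing in for a trained model (a real LLM derives these probabilities from a network, not a lookup table): the sampler picks the next token by probability alone, and nothing in the loop checks whether the result is true.

```python
import random

# Invented toy distribution for illustration only; a real LLM computes
# next-token probabilities from a trained network, not a hand-written table.
next_token_probs = {
    "the moon is made of": {"rock": 0.6, "cheese": 0.3, "plasma": 0.1},
}

def sample_next(context: str) -> str:
    """Pick the next token purely by probability; no step here checks truth."""
    dist = next_token_probs[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

print(sample_next("the moon is made of"))  # occasionally "cheese": probable-sounding, still false
```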
Only a damned fool would use it for anything, ever.
#epistemology #LLM #generativeAI #ArtificialIntelligence #ArtificialStupidity @philosophy
In other words, Generative AI and LLMs lack a sound epistemology, and that's very problematic:
"Bullshit and generative AI are not the same. They are similar, however, in the sense that both mix true, false, and ambiguous statements in ways that make it difficult or impossible to distinguish which is which. ChatGPT has been designed to sound convincing, whether right or wrong. As such, current AI is more about rhetoric and persuasiveness than about truth. Current AI is therefore closer to bullshit than it is to truth. This is a problem because it means that AI will produce faulty and ignorant results, even if unintentionally.
(...)
Judging by the available evidence, current AI – which is generative AI based on large language models – entails artificial ignorance more than artificial intelligence. That needs to change for AI to become a trusted and effective tool in science, technology, policy, and management. AI needs criteria for what truth is and what gets to count as truth. It is not enough to sound right, like current AI does. You need to be right. And to be right, you need to know the truth about things, like AI does not. This is a core problem with today's AI: it is surprisingly bad at distinguishing between truth and untruth – exactly like bullshit – producing artificial ignorance as much as artificial intelligence with little ability to discriminate between the two.
(...)
Nevertheless, the perhaps most fundamental question we can ask of AI is that if it succeeds in getting better than humans, as already happens in some areas, like playing AlphaZero, would that represent the advancement of knowledge, even when humans do not understand how the AI works, which is typical? Or would it represent knowledge receding from humans? If the latter, is that desirable and can we afford it?"
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5119382
#AI #GenerativeAI #Chatbots #LLMs #Ignorance #Epistemology #Bullshit
Recently, I got several opportunities to discuss the reproducibility crisis in science. To help discuss that complex topic, we need to agree on a vocabulary.
My favorite one has been published by Manuel López-Ibáñez, Juergen Branke and Luis Paquete, and is summarized in the attached diagram, which you can also find here: http://nojhan.net/tfd/vocabulary-of-reproducibility.html
It's good that this topic is not fading away, but is gaining traction. "Slowly, but surely", as we say in French.
If you want a high-resolution version suitable for printing, do not hesitate to ask!
Finally in print: the last puzzle piece in a decade of thinking about communication across social networks from the perspective of ideal rational agents. How do we factor in 'dependence', the fact that the same underlying evidence may, for example, reach us via multiple different reports, giving rise to double counting?
There is a strong intuition that multiple independent observations should carry more weight than dependent observations. But how much more?
We show that there is no (known) answer to this normative question in the general case. This renders a fundamental feature of human testimony unsolvable, meaning that acquiring knowledge via the testimony of others is much harder than typically assumed.
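A minimal numerical sketch of the double-counting worry, using invented numbers and only the two easy extreme cases (full independence and full dependence); the point of the paper is precisely that the interesting intermediate cases have no known general solution.

```python
# Toy Bayesian update with invented numbers: prior odds 1:1 for hypothesis H,
# and each report favouring H with a likelihood ratio of 3.
prior_odds = 1.0
likelihood_ratio = 3.0

def posterior_prob(odds: float) -> float:
    """Convert odds in favour of H into the probability of H."""
    return odds / (1.0 + odds)

one_report = prior_odds * likelihood_ratio            # odds 3:1 -> P(H) = 0.75
two_independent = prior_odds * likelihood_ratio ** 2  # odds 9:1 -> P(H) = 0.90
two_fully_dependent = one_report                      # the second report repeats the same
                                                      # underlying evidence, so it adds nothing

print(posterior_prob(one_report))          # 0.75
print(posterior_prob(two_independent))     # 0.90
print(posterior_prob(two_fully_dependent)) # 0.75 -- treating the dependent pair as
                                           # independent would overshoot to 0.90
```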
My favourite online comparative theologian, ESOTERICA's Dr. Justin Sledge, has put out part four of his Demiurge series; I'd recommend it if you're interested in how humans came up with all the crazy things we believe these days. Turns out wE lIvE iN a SimUlAtIoN is antique and creaky as well. https://www.youtube.com/watch?v=kq-CoIFf8l0 #Epistemology #Philosophy #ComparativeTheology #Occult #Religion
Recommend this very short (and valuable) read.
A reading recommendation on #analogy:
Few things in general #epistemology are as paper-producing, yet as demanding and as unhedged, as the study of the structures and roles of #analogy.
As so often, a glance at (somehow) corresponding research outside the #philosopher's #wheelhouse may be helpful. Here's a paper of that kind.
A couple of recent OA articles in a social epistemology journal.
Pedro Schmechtig reflects on epistemic paternalism and protective authority, https://www.tandfonline.com/doi/full/10.1080/02691728.2025.2453942?src=exp-oa, i.e. under what conditions it could be right to help, or even steer, someone to "know better".
Mark Coeckelbergh, in turn, examines the impact of artificial intelligence on epistemic agency, i.e. the autonomy and critical character of our knowledge formation, https://www.tandfonline.com/doi/full/10.1080/02691728.2025.2466164?src=exp-oa#abstract
#epistemology #socialEpistemology #tieto #tietoteoria #filosofia #philosophy #agency #toimijuus #paternalism #ai #tekoaly #journal #knowledge #belief #new #bubbles #socialmedia
#Epistemology, derived from the Greek words for "knowledge" and "study," is the philosophical study of knowledge, examining its nature, sources, limits, and validity, essentially exploring how we know what we know.