3. Kuhn's work itself suggests options for a common language between people using different paradigms. He seemed to presume that logic and deductive validity are common across all paradigms, for example. And he gave five characteristics (accuracy, consistency, scope, simplicity, fruitfulness) for deciding between paradigms, offering these five as a neutral basis for the decision rather than as part of any one paradigm.
2. All language is fuzzy, and (in the strongly held view of one pragmatist participant) even "truth is not an exact precise thing" and "all theories have some slop in them". So incommensurability is nothing unusual or specific to scientific paradigms; it's present in all our talk, and we're mostly quite good at grokking one another's meaning.
Thomas Kuhn & problems of incommensurability between scientific paradigms
https://plato.stanford.edu/entries/thomas-kuhn/#IncoWorlChan
The Austin LessWrong philosophy working group discussed this yesterday. The main ideas that came up:
1. A sense that Kuhn focuses strongly on physics, and that paradigm shifts in many other fields are fuzzy or absent. In neuroscience or computer science, for example, have there been any?
@gpowerf -
I think Yudkowsky would reject out of hand the argument that "we can easily switch off" a powerful autonomous system, since we don't know that to be true.
For myself, I think of that argument this way: First, we know present-day GPT-4 is not itself an agent, but that it can predict text responses of many kinds of people, and so in a clear sense is simulating those agents (at low fidelity) when prompted to do so. Second, we know present-day GPT-4 has already been hooked up to the Internet and prompted to develop and execute task lists in the style of various kinds of people, and it did so by emailing people and convincing or hiring them to do things for it. Third, we know that training a system like GPT-4 currently requires vast resources, but copying it via the API can be done in a few hours for a couple hundred bucks, and simply running it can be done on a mobile phone. So my conclusion is that even present-day GPT-4 is fully capable of jailbreaking itself in a short timeframe if prompted in a suitable way. I see little reason yet to expect we'll find a way to stop a more powerful version from doing similarly and getting more creative about it.
I agree that Yudkowsky argues at length for "god-like" AIs, and this is the point where I disagree with him most. I think chaos, randomness, and fundamental computational limits prevent a physical system from being "god-like". On the other hand, I think it's clear that nothing in his argument depends on an AI being "god-like"; all that matters to the argument is that it be significantly smarter than humans.
As for misalignment, that's just the default. No technology is automatically safe regardless of design. It'll be aligned if we design it to be aligned, and if we design it otherwise then it will be otherwise.
I'm not nearly as confident of disaster as Yudkowsky. I just think disaster is an obviously possible outcome that we have no plan to prevent. I find it very annoying when people dismiss the risks as "Skynet", as if those raising them were just memeing movies rather than thinking through the consequences.
@gpowerf
On the other hand, we have thinkers like Yudkowsky, who published this piece yesterday in TIME magazine: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
I think it's a mistake to focus too narrowly on current small dangers like bias and privacy. We should also look ahead to where things are headed. And if we do that, we see we are currently racing to create powerful systems that we don't understand. We shouldn't just handwave away the potential danger with trite allusions to movies.
Question about GitHub fork
@CanXClV - Your fork is a separate copy, so it would continue to exist.
@monotrox99
IIUC, rural areas would use low-band 5G, which is designed for lower capacity but much longer range. The high-band equipment in dense urban areas isn't the only kind of 5G equipment.
@peterdrake
I like https://flowx.io/ . It has a lot of at-a-glance info.
I was stuck on a long flight with nothing much to do, & curiosity about logic tables for a 4-valued logic (true, false, both, neither) is what bubbled up to mind 🤷♂️. The tables that seemed right to me turned out to define a logic called "First Degree Entailment". There's a nice overview here: https://link.springer.com/content/pdf/10.1007/s11225-017-9748-6.pdf
It's kinda fun to rediscover the very basics of a thing, and then get to learn all about it because other people have done the hard work 😁
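For anyone curious, here's a minimal sketch of those four-valued truth tables, using the common trick of encoding each value as the set of classical truth values it contains (T = {1}, F = {0}, B = {0,1}, N = {}); the names and layout below are mine, not taken from the linked paper.

```python
from itertools import product

# Each FDE value is the set of classical truth values it contains.
T = frozenset({1})       # true only
F = frozenset({0})       # false only
B = frozenset({0, 1})    # both true and false
N = frozenset()          # neither

NAMES = {T: "T", F: "F", B: "B", N: "N"}

def fde_not(a):
    # Negation flips each classical value the input contains: 1 -> 0, 0 -> 1.
    return frozenset(1 - x for x in a)

def fde_and(a, b):
    # "True" is in the result iff both inputs contain it;
    # "false" is in the result iff either input contains it.
    out = set()
    if 1 in a and 1 in b:
        out.add(1)
    if 0 in a or 0 in b:
        out.add(0)
    return frozenset(out)

def fde_or(a, b):
    # Dual of conjunction.
    out = set()
    if 1 in a or 1 in b:
        out.add(1)
    if 0 in a and 0 in b:
        out.add(0)
    return frozenset(out)

# Print the tables, e.g. B AND N = F, B OR N = T.
vals = [T, B, N, F]
print("NOT:", ", ".join(f"{NAMES[v]} -> {NAMES[fde_not(v)]}" for v in vals))
for name, op in [("AND", fde_and), ("OR", fde_or)]:
    for a, b in product(vals, repeat=2):
        print(f"{NAMES[a]} {name} {NAMES[b]} = {NAMES[op(a, b)]}")
```

The nice part of the set encoding is that negation, conjunction, and disjunction all fall out of the classical definitions with no case analysis.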
@Litzz11
Horrifying and wonderful at the same time 😁
@natertot
It's a pretty nifty place. Welcome, & have fun 🙂
@volkris
If you don't mind continuing to indulge my curiosity --
Is there also a practical or strategic reason why a bloc of Dems doesn't offer to cross the aisle in this vote by seeking their own concessions? I could imagine, e.g., an agreement to share power in certain committees.
@volkris
Ah, that is an explanation that makes practical sense to me. Thanks!
On the other hand, it seems quite easy to find alternatives that would serve the same goals more effectively. For example, a vote using the Ranked Pairs method can still guarantee that an internally divided party with even a slim majority elects one of its own, the candidate with the most support, on a single ballot (see the sketch below).
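To make that concrete, here's a small, hypothetical Ranked Pairs (Tideman) sketch: a 435-seat chamber where a 218-member majority party is split 120/98 between its candidates A and B, but every member of that party ranks both A and B above the opposition's C. The candidate names, vote counts, and helper functions are all invented for illustration.

```python
from itertools import permutations

def ranked_pairs_winner(ballots):
    """ballots: list of (count, ranking), each ranking listing candidates best-first."""
    candidates = set(ballots[0][1])

    # 1. Pairwise margins: margin[(x, y)] = (# prefer x over y) - (# prefer y over x).
    margin = {pair: 0 for pair in permutations(candidates, 2)}
    for count, ranking in ballots:
        pos = {c: i for i, c in enumerate(ranking)}
        for x, y in permutations(candidates, 2):
            if pos[x] < pos[y]:
                margin[(x, y)] += count
                margin[(y, x)] -= count

    # 2. Sort the majorities from strongest to weakest (ties broken by name here).
    majorities = sorted(((m, x, y) for (x, y), m in margin.items() if m > 0),
                        reverse=True)

    # 3. Lock in each majority unless it would create a cycle in the locked graph.
    locked = {c: set() for c in candidates}   # locked[x] = candidates x defeats

    def reaches(a, b, seen=None):
        seen = set() if seen is None else seen
        if a == b:
            return True
        seen.add(a)
        return any(reaches(n, b, seen) for n in locked[a] if n not in seen)

    for _, x, y in majorities:
        if not reaches(y, x):                 # locking x -> y must not close a cycle
            locked[x].add(y)

    # 4. The winner is the candidate with no locked defeat against them.
    beaten = {y for x in locked for y in locked[x]}
    return next(c for c in candidates if c not in beaten)

# Majority party (218 of 435) split 120/98 between A and B, but united above C.
ballots = [
    (120, ["A", "B", "C"]),
    (98,  ["B", "A", "C"]),
    (217, ["C", "A", "B"]),
]
print(ranked_pairs_winner(ballots))   # -> "A"
```

Because every majority-party ballot ranks A and B above C, the winner is guaranteed to come from {A, B}; the head-to-head tally between A and B then picks the one with more support (A here), all on a single ballot.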
@TNLNYC
If I understand correctly, the current rules say they can't change any rules till they elect a Speaker.
Hopefully they'll take a moment to fix the broken rules later.
@memes_1336
Is the point that she can't read well? 🙁 "Athletes" and "athletes under 35" are obviously very different cohorts. And her source is just some dudes' letter to the editor.
@volkris
It seems like an odd choice of rules. Do you happen to know the justification for it, or if not justification at least the origin?
a quiet nerd with a head full of ideals