
5. The pragmatist strongly emphasized that science is supposed to be better than scientists. Particular scientists might be partisans of their theories, but in a field of diverse partisans, the ones with better theories will tend to be more fruitful. Enough new researchers will prefer to go into more fruitful areas that, over time, even a field of all partisans should converge on better theories. (Insert adage about science progressing one funeral at a time.)

I wondered whether convergence by itself is sufficient to overcome incommensurability. Paradigms in philosophy, religion, and politics are harder to overcome than in science; what would happen if two people who disagree simply followed this process:

• They pick a topic and get a big piece of paper to write on.
• They take turns proposing interesting statements they think the other person might agree with.
• If they both agree, they write the statement down.
• They see what kind of consensus they can put together as they fill up the page.


4. Scientists and especially engineers are typically realists about what they're working with. They might technically have incommensurable definitions of "mass", for example, but they get past it essentially by saying, "Let me show you what I mean", and getting out some materials and equipment and pointing at something they do.

It's not essential that the pointing behavior always succeeds in having a real referent. Phlogiston and the ether are examples where the pointing might fail. But enough of the behaviors succeed, or appear to: pointing at temperature increases in the case of phlogiston, for example, and at light propagation in the case of the ether. These successes or apparent successes in "pointing out" things give a realist way of bypassing incommensurability.


3. Kuhn's work itself suggests options for a common language between people using different paradigms. He seemed to presume logic and deductive validity are common across all paradigms, for example. And he gave five characteristics (accuracy, consistency, scope, simplicity, fruitfulness) for deciding between paradigms; these five are offered as a neutral basis for deciding rather than as part of any one paradigm.


2. All language is fuzzy, and (in the strongly held view of one pragmatist participant) even "truth is not an exact precise thing", and "all theories have some slop in them". So incommensurability is nothing unusual or specific to scientific paradigms; it's present in all our talk, and we're mostly quite good at grokking what each other mean.


Thomas Kuhn & problems of incommensurability between scientific paradigms
plato.stanford.edu/entries/tho

The Austin LessWrong philosophy working group discussed this yesterday. The main ideas discussed:

1. A sense that Kuhn focuses strongly on physics, and that paradigm shifts in many other fields are fuzzy or absent. e.g. in neuroscience or computer science, have there been any?

@gpowerf -
I think Yudkowsky would reject out of hand the argument that "we can easily switch off" a powerful autonomous system, since we don't know that to be true.

For myself, I think of that argument this way: First, we know present-day GPT-4 is not itself an agent, but that it can predict text responses of many kinds of people, and so in a clear sense is simulating those agents (at low fidelity) when prompted to do so. Second, we know present-day GPT-4 has already been hooked up to the Internet and prompted to develop and execute task lists in the style of various kinds of people, and it did so by emailing people and convincing or hiring them to do things for it. Third, we know that training a system like GPT-4 currently requires vast resources, but copying it from the API can be done in a few hours for a couple hundred bucks, and simply running it can be done on a mobile phone. So my conclusion is that even present-day GPT-4 is fully capable of jailbreaking itself in a short timeframe if prompted in a suitable way. I see little reason yet to expect we'll find a way to stop a more powerful version from doing similarly and getting more creative about it.

I agree that Yudkowsky argues at length for "god-like" AIs, and this is the point where I disagree with him most. I think chaos, randomness, and fundamental computational limits prevent a physical system from being "god-like". But on the other hand I think it's clear that nothing in his argument depends on an AI being "god-like"; all that matters to the argument is that it be significantly smarter than humans.

As for misalignment, that's just the default. No technology is automatically safe regardless of design. It'll be aligned if we design it to be aligned, and if we design it otherwise then it will be otherwise.

I'm not nearly as confident of disaster as Yudkowsky. I just think disaster is an obviously possible outcome that we have no plan to prevent. I find it very annoying when people dismiss the risks as "skynet", as if they're just memeing movies rather than thinking through the consequences.

@gpowerf
On the other hand we have thinkers like Yudkowsky, published yesterday in TIME mag: time.com/6266923/ai-eliezer-yu

I think it's a mistake to focus too narrowly on current small dangers like bias and privacy. We should also look ahead to where things are headed. And if we do that, we see we are currently racing to create powerful systems that we don't understand. We shouldn't just handwave away the potential danger with trite allusions to movies.

In the movies, whenever the Jedi use the Force for telekinesis, it's like a shockwave hitting a target, or lifting a large mass against gravity, or wildly flinging a small object. So I'd guess the lore is that fine motor control with the Force isn't practical.

Add this line to the <head> section of that page, if you have access to it:

<meta name="robots" content="noindex">

Question about a GitHub fork

@CanXClV - Your fork is a separate copy, so it would continue to exist.

@monotrox99
IIUC, rural areas would use lowband 5G. It's designed for low capacity and high range. The highband equipment in dense urban areas isn't the only 5G equipment.

I was stuck on a long flight with nothing much to do, and curiosity about logic tables for a 4-valued logic (true, false, both, neither) is what bubbled up to mind 🤷‍♂️. The tables that seemed right to me turned out to be a logic called "First Degree Entailment". There's a nice overview here: link.springer.com/content/pdf/

It's kinda fun to rediscover the very basics of a thing, and then get to learn all about it because other people have done the hard work 😁
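Those tables are easy to reproduce with the standard relational reading of FDE, where each of the four values is the set of classical truth values a statement bears. This is a minimal Python sketch under that assumption; the names and layout are mine, not from the linked overview:

```python
# Four-valued First Degree Entailment, with each value modeled as the
# set of classical truth values (1 = true, 0 = false) a statement bears.
T = frozenset({1})     # true only
F = frozenset({0})     # false only
B = frozenset({0, 1})  # both
N = frozenset()        # neither

def NOT(a):
    # Negation swaps true and false, so B and N are fixed points.
    return frozenset(1 - x for x in a)

def AND(a, b):
    # A conjunction is true iff both conjuncts are true,
    # and false iff at least one conjunct is false.
    out = set()
    if 1 in a and 1 in b:
        out.add(1)
    if 0 in a or 0 in b:
        out.add(0)
    return frozenset(out)

def OR(a, b):
    # Dually: true iff at least one disjunct is true,
    # false iff both disjuncts are false.
    out = set()
    if 1 in a or 1 in b:
        out.add(1)
    if 0 in a and 0 in b:
        out.add(0)
    return frozenset(out)

# Print the AND truth table.
names = {T: "T", F: "F", B: "B", N: "N"}
for a in (T, F, B, N):
    print(names[a], "AND:", [names[AND(a, b)] for b in (T, F, B, N)])
```

One cute consequence the table makes visible: B AND N comes out F, and B OR N comes out T, even though neither input is classically true or false.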

@ambulocetus
The crow is wrong 🙂. Even if goodness and badness form partially ordered sets, then for any arbitrary thresholds one chooses, things worse than the "bad things" threshold happen to people better than the "good people" threshold.

@pivoinebleue@mstdn.social
Ah that's good journalism. There are some well-chosen and informative quotes in that article beyond the schadenfreude aspect.

@Litzz11
Horrifying and wonderful at the same time 😁

@natertot
It's a pretty nifty place. Welcome, & have fun 🙂

@volkris
If you don’t mind continuing to indulge my curiosity –
Is there also a practical or strategic reason why a bloc of Dems doesn't offer to cross the aisle in this vote by seeking their own concessions? I could imagine, e.g., an agreement to share power in certain committees.

@volkris
Ah, that is an explanation that makes practical sense to me. Thanks!

On the other hand, it seems quite easy to find alternatives that would serve the same goals more effectively. For example, a vote using the Ranked Pairs method can still guarantee that an internally divided party with even a slim majority elects one of its own, the candidate with the most support, on a single ballot.
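To illustrate the Ranked Pairs (Tideman) idea, here's a minimal Python sketch. The candidate names and vote counts are hypothetical, and complete ballots (every voter ranks every candidate) are assumed:

```python
from itertools import combinations

def ranked_pairs(candidates, ballots):
    """Tideman's Ranked Pairs. Each ballot lists every candidate,
    most-preferred first."""
    # Count, for each ordered pair (a, b), how many voters prefer a to b.
    prefer = {(a, b): 0 for a in candidates for b in candidates if a != b}
    for ballot in ballots:
        rank = {c: i for i, c in enumerate(ballot)}
        for a, b in combinations(candidates, 2):
            if rank[a] < rank[b]:
                prefer[(a, b)] += 1
            else:
                prefer[(b, a)] += 1
    # Pairwise majorities, strongest first.
    majorities = sorted(
        (p for p in prefer if prefer[p] > prefer[(p[1], p[0])]),
        key=lambda p: prefer[p], reverse=True)
    # Lock in each majority unless it would create a cycle.
    locked = set()
    def reaches(src, dst):
        return (src, dst) in locked or any(
            a == src and reaches(b, dst) for (a, b) in locked)
    for a, b in majorities:
        if not reaches(b, a):
            locked.add((a, b))
    # The winner is a candidate no locked majority defeats.
    defeated = {b for (_, b) in locked}
    return next(c for c in candidates if c not in defeated)

# A majority party (5 votes) split between X and Y, vs. a united
# minority (4 votes) behind Z: the majority's stronger candidate wins.
ballots = ([["X", "Y", "Z"]] * 3 + [["Y", "X", "Z"]] * 2
           + [["Z", "X", "Y"]] * 4)
print(ranked_pairs(["X", "Y", "Z"], ballots))  # prints X
```

The point of the example: even though the majority party splits its first choices 3-2, every pairwise contest against Z is won 5-4, so a single ballot settles both the inter-party and the intra-party question at once.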

@TNLNYC
If I understand correctly, the current rules say they can't change any rules till they elect a Speaker.

Hopefully they'll take a moment to fix the broken rules later.

Qoto Mastodon
