I've been reading the 2016 sci-fi book "Too Like the Lightning", by Ada Palmer -- and loving it so much! Honestly I was hooked by the *title page*, because it just jumped right into the world building with gusto. (The pic is a bit blurry, sorry. But take a look anyway!)

In the 10th century, the Persian traveler Buzurg ibn Shahriyar wrote in his book about a jinn market in Kashmir.

According to local informants, the jinn marketplace lay in lush gardens among running streams. The jinn could be heard around the gardens buying and selling, but no one ever saw them.

Sadly, he doesn't record more than that. Even so, it sounds like a fascinating setting for a story. 🧞 🧞‍♀️

#FolktaleMoment #histodon #folklore #mythology #WyrdWednesday #storytelling

Democrats already support these things, and the GOP opposes them. So it's unclear what a third party could meaningfully add.

I like the half-bracket notation for sub-claims. Particularly if the sub-claims are separated by connectives like "and", "or", or commas, I expect it's fine to leave out the delimiters entirely. Nested claims would still require delimiters, but it's clearly better to just not make nested claims.

For multiple subjects and a common claim, subscripting each subject is indeed awkward, but I can't think of anything better. It's not hard to guess the intended meaning, though, so it should be fine.

I've been playing with the Hat aperiodic monotile and I've found a simple decoration that produces nice patterns.

You can download the corresponding 3D printing files here: printables.com/model/448090-ap
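
For anyone who wants to play along in code: the Hat is a polykite, made of eight kites from the lattice you get by splitting each regular hexagon into six kites. Here's a minimal sketch of my own (not part of the post or the model files; matplotlib assumed) that just draws that underlying kite lattice:

import math
import matplotlib.pyplot as plt

def hex_kites(cx, cy, r=1.0):
    """Yield the six kites of a flat-top hexagon with circumradius r at (cx, cy)."""
    verts = [(cx + r * math.cos(math.radians(60 * i)),
              cy + r * math.sin(math.radians(60 * i))) for i in range(6)]
    mids = [((verts[i][0] + verts[(i + 1) % 6][0]) / 2,
             (verts[i][1] + verts[(i + 1) % 6][1]) / 2) for i in range(6)]
    for i in range(6):
        # Each kite runs center -> previous edge midpoint -> vertex -> next edge midpoint.
        yield [(cx, cy), mids[i - 1], verts[i], mids[i]]

fig, ax = plt.subplots()
for row in range(4):
    for col in range(5):
        # Standard flat-top hexagon grid: odd columns shifted up half a hexagon.
        cx = 1.5 * col
        cy = math.sqrt(3) * row + (col % 2) * math.sqrt(3) / 2
        for kite in hex_kites(cx, cy):
            ax.add_patch(plt.Polygon(kite, fill=False, linewidth=0.5))
ax.set_xlim(-1.5, 8)
ax.set_ylim(-1.5, 8)
ax.set_aspect("equal")
plt.show()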

Next paper for the Austin LessWrong philosophy working group: "The Virtue of Subtlety and the Vice of a Heavy Hand" by Alex King: philosopher-king.com/king_subt

The whole branch of aesthetics is new to me, except for arguments defining art or beauty. I wanted something that felt different, something that focuses our attention on the qualities of particular artworks and makes us think about them. This paper does so in a way I find accessible but exciting 🙂.

Impressive hit piece. A good opportunity for a pass-out drinking game of finding the smears-by-implication (i.e. where the text does its damnedest to leave readers with a nasty false impression, but without literally lying).

Ugh, somebody let porn into the federated timeline on QOTO. 😠 There's no way to get rid of it without blocking the user or domain each time it pops up, and that's not very practical. I guess this is where an ML-curated timeline like Twitter's can shine, simply not showing undesired content of this kind.

@ceoln
It should be interpreted as "it's true of scientists and it's especially true of engineers". There's no intended implication that they're the same thing.

5. The pragmatist strongly emphasized that science is supposed to be better than scientists. Particular scientists might be partisans of their theories, but in a field of diverse partisans, the ones with better theories will tend to be more fruitful. Enough new researchers will prefer to go into more fruitful areas that, over time, even a field of all partisans should converge on better theories. (Insert adage about science progressing one funeral at a time.)

I wondered whether convergence by itself is sufficient to overcome incommensurability. Paradigms in philosophy, religion, & politics are harder to overcome than in science. What if two people who disagree simply followed this process? (A toy sketch in code follows the list.)

• They pick a topic and get a big piece of paper to write on.
• They take turns proposing interesting statements they think the other person might agree with.
• If they both agree, they write the statement down.
• They see what kind of consensus they can put together as they fill up the page.
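
Here's a minimal toy simulation of that process, purely my own illustration; the Agent class, its belief sets, and the propose/agree behavior are hypothetical stand-ins for real people:

import random

class Agent:
    """A hypothetical stand-in for one participant."""
    def __init__(self, name, beliefs):
        self.name = name
        self.beliefs = beliefs  # statements this person accepts

    def propose(self):
        # Offer a statement you hold that the other person might share.
        return random.choice(sorted(self.beliefs))

    def agrees_with(self, statement):
        return statement in self.beliefs

def build_consensus(a, b, rounds=40):
    page = []  # the big piece of paper
    for turn in range(rounds):
        proposer, judge = (a, b) if turn % 2 == 0 else (b, a)  # take turns
        statement = proposer.propose()
        if judge.agrees_with(statement) and statement not in page:
            page.append(statement)  # both agree: write it down
    return page  # whatever consensus they assembled

alice = Agent("Alice", {"s1", "s2", "s3", "s5"})
bob = Agent("Bob", {"s2", "s3", "s4", "s5"})
print(build_consensus(alice, bob))  # some subset of {'s2', 's3', 's5'}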

4. Scientists and especially engineers are typically realists about what they're working with. They might technically have incommensurable definitions of "mass", for example, but they get past it essentially by saying, "Let me show you what I mean", then getting out some materials and equipment and pointing at something they can do.

It's not essential that the pointing behavior always succeed in having a real referent. Phlogiston and the ether are examples where the pointing might fail. But enough of the behaviors succeed: pointing at temperature increases in the case of phlogiston, for example, and at light propagation in the case of the ether. The successes, or apparent successes, in "pointing out" things give a realist way of bypassing incommensurability.

3. Kuhn's work itself suggests options for a common language between people using different paradigms. He seemed to presume that logic and deductive validity are common across all paradigms, for example. And he gave five characteristics (accuracy, consistency, scope, simplicity, fruitfulness) for deciding between paradigms; these five form a neutral basis for deciding rather than being part of any one paradigm.

2. All language is fuzzy, and (in the strongly held view of one pragmatist participant) even "truth is not an exact precise thing" and "all theories have some slop in them". So incommensurability is nothing unusual or specific to scientific paradigms; it's present in all our talk, and we're mostly quite good at grokking what each other mean.

Thomas Kuhn & problems of incommensurability between scientific paradigms
plato.stanford.edu/entries/tho

The Austin LessWrong philosophy working group discussed this yesterday. The main ideas discussed:

1. A sense that Kuhn focuses strongly on physics, and that paradigm shifts in many other fields are fuzzy or absent. In neuroscience or computer science, for example, have there been any?

@gpowerf -
I think Yudkowsky would reject out of hand the argument that "we can easily switch off" a powerful autonomous system, since we don't know that to be true.

For myself, I think of that argument this way:

First, we know present-day GPT-4 is not itself an agent, but it can predict the text responses of many kinds of people, and so in a clear sense it is simulating those agents (at low fidelity) when prompted to do so.

Second, we know present-day GPT-4 has already been hooked up to the Internet and prompted to develop and execute task lists in the style of various kinds of people, and it did so, emailing people and convincing or hiring them to do things for it.

Third, we know that training a system like GPT-4 currently requires vast resources, but copying its behavior from the API can be done in a few hours for a couple hundred bucks, and simply running the copy can be done on a mobile phone.

So my conclusion is that even present-day GPT-4 is fully capable of jailbreaking itself in a short timeframe if prompted in a suitable way. I see little reason yet to expect we'll find a way to stop a more powerful version from doing the same and getting more creative about it.

I agree that Yudkowsky argues at length that AIs could become "god-like", and this is the point where I disagree with him most. I think chaos, randomness, and fundamental computational limits prevent a physical system from being "god-like". But on the other hand, I think it's clear that nothing in his argument depends on an AI being "god-like"; all that matters to the argument is that it be significantly smarter than humans.

As for misalignment, that's just the default. No technology is automatically safe regardless of design. It'll be aligned if we design it to be aligned, and if we design it otherwise then it will be otherwise.

I'm not nearly as confident of disaster as Yudkowsky. I just think disaster is an obviously possible outcome that we have no plan to prevent. I find it very annoying when people dismiss the risks as "Skynet", as if they're just memeing movies rather than thinking through the consequences.

@gpowerf
On the other hand, we have thinkers like Yudkowsky, whose essay ran yesterday in TIME: time.com/6266923/ai-eliezer-yu

I think it's a mistake to focus too narrowly on current, smaller dangers like bias and privacy harms. We should also look ahead to where things are headed. And if we do, we see we are currently racing to create powerful systems that we don't understand. We shouldn't just handwave away the potential danger with trite allusions to movies.

In the movies, whenever the Jedi use the Force for telekinesis, it's like a shockwave hitting a target, or lifting a large mass against gravity, or wildly flinging a small object. So I'd guess the lore is that fine motor control with the Force isn't practical.

Add this line to the <head> section of that page, if you have access to it:

<meta name="robots" content="noindex">
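
And if you want to confirm the tag is being served, here's a quick sketch of my own (standard library only; the URL is a hypothetical placeholder) that fetches a page and reports whether a robots meta tag containing "noindex" is present:

from html.parser import HTMLParser
from urllib.request import urlopen

class RobotsMetaFinder(HTMLParser):
    """Scan HTML for <meta name="robots" content="...noindex...">."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "noindex" in (a.get("content") or "").lower():
                self.noindex = True

    def handle_startendtag(self, tag, attrs):
        # Also catch self-closing form: <meta ... />
        self.handle_starttag(tag, attrs)

parser = RobotsMetaFinder()
with urlopen("https://example.com/that-page") as resp:  # placeholder URL
    parser.feed(resp.read().decode("utf-8", errors="replace"))
print("noindex found" if parser.noindex else "no noindex meta tag")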

Question about GitHub fork

@CanXClV - Your fork is a separate copy, so it would continue to exist.
