I’ve been trying to think of a way to describe how America generally enshittifies everything over time in ways that other countries don’t, despite other countries also being capitalist. If our economic problems are caused by “capitalism,” then why are we nearly alone in experiencing them? How is capitalism making China better but the USA worse?

I think I’m circling “financialization” as a universal descriptor.

I’ve often ranted against private equity specifically, but in quiet times I have to admit that there are good and bad PE firms. The legal and cultural incentives are aligned against the good ones, but they do exist. I think most of our problems are caused specifically by private equity firms financializing everything, but sometimes the call is coming from inside the house. For example, I don’t think Boeing’s current woes are PE-caused. They put MBAs in charge, and those MBAs financialized the company, leading to enshittification.

I have nothing against people with MBA degrees. I just don’t think they should be in charge.

With Tim Cook in charge of Apple, his tendency to financialize things is pushing out Apple’s long history of good stuff in ways that taint Apple and make me glad governments are digging in. It seems like the company has been in tension between good engineering and design on one side, and Tim Cook insisting that Services revenue grow on the other side, and Cook is CEO, so that side is winning.

Financialization. I’m not sure where I first picked up the word, but a search of Cory Doctorow’s site tells me he’s used it quite a bit, so possibly there, or possibly the book “These Are the Looters,” which I–well, enjoyed isn’t the word–appreciated recently.

I’m but one quiet voice, feeling slightly uncomfortable as people push for changes to the status quo that I think are slightly too radical. I think of myself as a progressive but not a revolutionary, and I think labeling “capitalism” as inherently evil isn’t productive.

Our economic problems in the USA aren’t caused by “capitalism,” no matter how many times people say they are. As evidence, see other capitalistic countries not having the same problems. They’re caused by financialization, relentlessly making finance the ultimate arbiter of everything. That’s not uniquely an American issue, but it’s run amok here like nowhere else, and it has ruined, is ruining, or will ruin everything you enjoy about capitalism unless it is stopped.


I may be one of the last people to finally watch part 3 (aka Season 2) of “Money Heist,” which means I’m very late with this complaint. Waiting nearly 30 episodes to suddenly drop an English song into the soundtrack is jarring enough, but having it be the very recognizable acoustic guitar of “Delicate” by Damien Rice? That is a gut punch. The death of a major character hits hard enough in any case, but with that song?

An episode later, they rolled out another English song, “Wake Up” by Arcade Fire, in a much more positive moment, but then they undercut that by having another character learn about the aforementioned death and sob like a broken person.

All in all, I feel emotionally manipulated by English-language music.

I find myself increasingly frustrated by specific ways the American devotion to capital makes life worse, and today I realized part of why. I invited people to this country to be my family, and now I feel responsible for how I perceive this country to be failing compared to where they came from.

As I mused on this, my partner interrupted to say that things here are better, and I should stop. I needed to hear that today!

Staggeringly, Chase sent me an email with a link I am required to click in order to receive money electronically from my Chase credit card balance.

An email. With a link. That I must click.

I checked the website and the app, then called Chase. Not the number in the email, of course, but a number I already knew. The person I talked to confirmed that the email was legitimate, but also said that clicking that link was the only way to receive the money electronically. Not via the website, not via the app, only by clicking a link in an email.

I am having serious trouble believing this to be true, but all the evidence seems to be pointing that way.

It’s like nobody at Chase has ever heard of spam email, as if no Chase employee has to take employer-mandated training annually that explains in great detail how you should never never never click links in emails and supply banking details.

I’ll wait for the paper check in two weeks. I will never click that link. Shame on Chase for sending it!

I have spent too much of my life consoling myself. “Things could be worse.”

But you know what? Things could be better!

They don’t have to be; it isn’t inevitable. But it’s possible.

Things are better than they used to be, but they could be better still.

Inasmuch as the index at JustWatch is accurate, it paints a dire picture of the movie library at Netflix. Netflix makes it incredibly difficult to navigate its library by criteria like release date, which makes this hard to confirm, and it’s hard to come away thinking that navigation choice isn’t deliberate.

The oldest movie Netflix currently features seems to be “White Christmas,” from 1954. In fact, that seems to be their only movie older than 1962.

justwatch.com/us/provider/netf

Is that accurate? How could we be sure? Clicking on the names of the actors in that movie doesn’t pull up any other movies, even though Bing Crosby was in more than 100. That’s suggestive.

In 1962, a second movie appears! Its original title is “प्रोफ़ेसर”, which translates to “Professor.” Then in 1966 we get “आम्रपाली” (“Amrapali”), and in 1969 we get “Prince,” another Hindi movie. If you’re looking for something in English, your second choice is a 1972 documentary called “FTA,” about Jane Fonda and Donald Sutherland’s opposition to the Vietnam War and the titular Army engaged in it.

1954, 1972, and then in 1974 we get the disaster movie “Earthquake.” Finally, 1975 brings three whole movies: “Jaws,” a Robert Redford movie, and “Monty Python and the Holy Grail.” That’s from a year in which IMDb reports 3,682 movies were released, although that admittedly includes a lot of non-English films.

imdb.com/search/keyword/?ref_=

Back when I got red envelopes mailed to my house, I could watch basically any movie ever made, or at least any movie released on DVD, and I am sure I watched more than three movies made before 1975.

JustWatch believes that the Netflix movie library currently has 3,916 movies, which is quite a comedown from what MarketWatch once described as “4,335 in March 2016 and 6,494 in March 2014.” Still, it’s higher than when I checked JustWatch in May of 2021, which reported 3,622 movies.

We’re paying more and more for less and less. Companies are pulling movies and shows from streaming to abuse tax law, and we have no legal recourse. This isn’t the fault of Netflix, or at least not Netflix alone, but it’s badly broken.

Macau seems like Las Vegas, but smaller, cleaner, and slightly more Chinese.

It’s my first time visiting Taiwan, and I really, really like it! Normally when traveling, the more unfamiliar things are, the better, but maybe the US-style electricity and very similar driving are subtly signaling familiarity to me.

Tainan and Taichung so far, Taipei tomorrow. Everything has been great.

I realized today that neither of my cars has the ability to play CDs.

The EV era and the CD era apparently do not overlap.

Sure, “generative AI” image models are all the rage, but don’t sleep on slightly-older-fashioned FaceApp de-aging, aging, and re-gendering.

This is what I looked like, will look like, and would have looked like, apparently.

Every night, I pick up a medium soda from a local chain convenience store*. I do this at night because, well, that’s when I have time to leave the house. But I don’t want to drink soda at night, caffeinated as it is. So each night I pick up the next day’s soda.

Since I’m not planning to drink it until the next day, I don’t bother inserting a straw, which is how I’ve come to notice something odd.

The manufacturing tolerance on plastic lids is pretty tight. So tight, in fact, that if I just slap a lid down on a cup full of soda and carry it to my car, the jostling and release of bubbles often pushes up on the lid so much that soda leaks out of the side of the cup.

But wait, what about the hole in the lid? Shouldn’t the gas escape through the hole in the lid, rather than pushing up and creating a dome? It turns out that when I don’t insert a straw, there is no hole in the lid. There are two cuts in the lid, making an X or + into which a straw may be inserted, but if I don’t insert a straw, those cuts are basically sealed tight, tighter than the edges of the cup, for example.

I’ve started to use a fingernail to bend up one of the corners of the straw cut-out, just to let a very small amount of gas escape, so I still have bubbly soda the next day.

Some other time I’ll talk about why I have had as many as four (4) medium sodas in the fridge overnight.

* I said “pick up,” not “buy.” Despite owning only electric cars, I visit a gas station every day. The incentive program at RaceTrac is incredibly effective. Over time I built up enough points that I get a free medium soda or slushee (or a small coffee) every single day. Of course, I go every day, because why not? And since I’m there every day, sometimes I also buy roller grill items (America’s equivalent of Asia’s street food), and sometimes candy. So RaceTrac is giving away a lot of soda–which costs them very little–and managing to keep me as a customer, despite the fact that I will never again need to buy their main product. Pretty impressive!

On mastodon, I have felt strangely free to post when I think I have something to say, and remain a silent observer when I don’t. This might come as a surprise to some, who wonder why I think I had to say that, or this other thing, but that’s life.

For whatever record anybody is keeping, I support people. I believe in people. LGBTQIA+ people, straight people, people of different ethnic backgrounds, people who believe in different things or nothing, people. I support people living their own lives, making their own choices, loving or not loving whom they wish.

I support anybody willing to support the same people I do, while allowing people to opt out. Some people opt out by pursuing trans-exclusionary views, or anti-immigrant views, or views that involve the superiority of one ethnic group and the inferiority of others. Believing any of that nonsense is opting out of the amazing wonder of a pluralistic modern society. It’s siding with the bad guys, and nobody should want to be like a Nazi, or a Confederate, or any other historical losers who also thought they should treat others badly.

Be a winner! Don’t just tolerate people, support people! Love people!

This is a pretty amazing video that demonstrates the state of the art of the leading LLM.

youtu.be/8y7GRYaYYQg

Wyatt Cheng is a Game Director at Blizzard Entertainment, so his results draw on years of experience as well as what is visible in the video. Still, he was able to produce a working game without writing any code himself, relying on the Unity toolchain, ChatGPT-generated code, and his own experience and understanding.

As he says in the video, we’re still not quite at the point where someone with no ability or experience can create a game using ChatGPT, but we’re very close. More to the point, I think this opens new doors for someone who “thinks like a programmer” but maybe lacks C# experience and has therefore felt intimidated by Unity.

Perhaps it’s unsurprising that this seems to support my priors, but I continue to see these amazing LLMs more as productivity enhancers than as job destroyers. Not to say some jobs won’t be destroyed! Enhancing the productivity of developers means that any large enough team now needs fewer developers. It’s also more clear than ever to me that those who can communicate clearly and completely have an advantage in dealing with LLMs.

I think it’s reasonable to be concerned about the developer pipeline when junior-level work can be accomplished almost as easily as it can be described. How will junior developers ever become senior developers if they can’t get jobs doing junior development and building experience? I’m interested in how we will solve this challenge as an industry.

In the meantime, I’m enjoying having ChatGPT write shell scripts and simple Go functions for me.

I don’t think there is anything profound I can say about the ongoing, escalating, heart-breaking nightmare in state legislatures around the United States. In Florida and Tennessee, it seems to be an all-out war against American children, joining and escalating the ongoing racism still present in this country.

I thought we were in a low-trust society before, but it’s clear our trust in institutions can–and should–be even lower.

I stand with people trying to live their lives without fear. I stand with trans people, gay kids, women, Black people, and poor people. I stand against those who care more about their comfort or hobbies than human lives. I can’t believe this isn’t the position of every thinking human, but apparently we live in a heartless world.

Or at least a heartless country.

This is a really great summary of almost everything I’ve typed on the subject of LLMs in the last week or three, but the writer was much more concise: confusedbit.dev/posts/how_does

Today is a day for posing puzzles to ChatGPT, I guess, but it’s not my first time doing so. I’d previously asked the classic river-crossing puzzle, changing the names of the traditional animals and vegetable to avoid triggering known sequences of words. Previously I had been using the March 14 public release of ChatGPT, but just now I regenerated with the March 23 public release, which might or might not be using the same underlying model.

Previously:

Q. Suppose I have a blurple, a mantiple and a hoje, and I need to get them across a river. I have a boat that can only carry myself and a single other item. I am not allowed to leave the blurple and hoje alone together, and I am not allowed to leave the hoje and mantiple alone together. How can I safely get all three across?

ChatGPT: This is a classic river crossing puzzle. Here’s one possible solution:

ChatGPT: 1. Take the blurple across the river, leaving the mantiple and hoje behind.

ChatGPT: 2. Leave the blurple on the other side of the river and return to the original side.

ChatGPT: 3. Take the hoje across the river and leave it there with the blurple.

ChatGPT: 4. Take the mantiple across the river and leave it with the hoje.

ChatGPT: 5. Return to the original side of the river and retrieve the blurple.

ChatGPT: 6. Finally, take the blurple across the river to join the mantiple and hoje on the other side.

ChatGPT: This solution ensures that no two items that cannot be left alone together are ever left alone together.

While the answer claims to recognize and understand the problem, it gives a bad answer. At step 1, the hoje eats the mantiple, and at step 3, the hoje eats the blurple. Then at step 4, the hoje eats the mantiple a second time.
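For what it’s worth, the correct solution can be found mechanically. Here’s a minimal brute-force search sketch in Python, using the puzzle’s made-up names; I’m assuming the standard wolf-goat-cabbage structure, with the hoje playing the goat’s role (it’s the item that can’t be left with either of the others):

```python
from collections import deque

# Items and the pairs that can't be left alone together without me.
ITEMS = ("blurple", "mantiple", "hoje")
FORBIDDEN = [{"blurple", "hoje"}, {"hoje", "mantiple"}]

def safe(bank):
    """A bank is safe if no forbidden pair is on it unattended."""
    return not any(pair <= bank for pair in FORBIDDEN)

def solve():
    # State: (items on the near bank, which side I'm on). BFS finds
    # the shortest legal sequence of crossings.
    start = (frozenset(ITEMS), "near")
    goal = (frozenset(), "far")
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (near, side), path = queue.popleft()
        if (near, side) == goal:
            return path
        here = near if side == "near" else frozenset(ITEMS) - near
        other = "far" if side == "near" else "near"
        # I can cross alone, or carry one item from my side.
        for cargo in [None, *here]:
            new_near = set(near)
            if cargo is not None:
                if side == "near":
                    new_near.discard(cargo)
                else:
                    new_near.add(cargo)
            # The bank I leave behind must be safe without me.
            left_behind = new_near if side == "near" else set(ITEMS) - new_near
            if not safe(left_behind):
                continue
            state = (frozenset(new_near), other)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [(cargo, other)]))

for step, (cargo, direction) in enumerate(solve(), 1):
    print(f"{step}. cross to the {direction} bank carrying {cargo or 'nothing'}")
```

The search confirms the classic seven-crossing answer: the hoje must go first and last, exactly the shape ChatGPT’s answer failed to reproduce.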

But let’s try again! Today’s answer is:

ChatGPT: This is a classic puzzle that requires careful planning to solve. Here’s one possible solution:

ChatGPT: 1. Take the blurple and the hoje across the river. Leave the blurple on the other side and come back alone.

ChatGPT: 2. Take the mantiple across the river. Leave it on the other side with the blurple, and bring the hoje back with you.

ChatGPT: 3. Leave the hoje on the original side and take the blurple back across the river.

ChatGPT: 4. Finally, take the hoje across the river to reunite it with the mantiple and the blurple.

ChatGPT: By following these steps, you ensure that the blurple and hoje are never left alone together, and neither are the hoje and mantiple. All three items are safely transported across the river.

Loud, exasperated sigh!

Compare and contrast the question, which includes “I have a boat that can only carry myself and a single other item,” with this answer, which starts “Take the blurple and the hoje across the river.” That breaks the rules in a new way, and goes on to still let the hoje eat the blurple in step 1. Meanwhile, the blurple is on both sides of the river at once after step 2.

Syntax and probabilistic word association are far more powerful than I expected them to be, but these logic puzzles certainly don’t give any indication of that, at least not with this model.

Wow, it gets worse! I asked a third question (adding one to each of the colors, plus three to the total in the source puzzle):

Q. A man has 56 socks in his drawer: 22 identical blue, 16 identical black and 18 identical red. The lights are out and he is completely in the dark. How many socks must he take out to make 100 percent certain he has at least one pair of black socks?

ChatGPT: To make 100% certain that he has at least one pair of black socks, the man needs to take out only 3 socks.

ChatGPT: Here’s why:

ChatGPT: The worst-case scenario for the man is that he picks out all of the blue socks and all of the red socks before he gets to the black socks. In that case, he would have picked out 22 + 18 = 40 socks without getting a pair of black socks.

ChatGPT: However, as soon as he picks out the 41st sock, it will be a black sock. So, he only needs to take out 3 socks (the first two to get a different color than black, and the third to get a black sock) to make sure that he has at least one pair of black socks.

What? That’s… the correct answer is 42, not 3!
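The worst-case arithmetic is simple enough to check mechanically. A quick sketch:

```python
# Worst case: draw every non-black sock first (all blue and all red),
# then two more draws guarantee a pair of black socks.
blue, black, red = 22, 16, 18
worst_case = (blue + red) + 2
print(worst_case)  # 42
```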

Is it going to get all of these wrong? That is not what I expected!

Q. The day before two days after the day before tomorrow is Saturday. What day is it today?

ChatGPT: If the day before two days after the day before tomorrow is Saturday, then today is Thursday.

ChatGPT: Here’s why:

ChatGPT: “Two days after the day before tomorrow” is the same as “two days after today”. So, if the day before two days after the day before tomorrow is Saturday, then we can write it as:

ChatGPT: The day before (two days after today) is Saturday.

ChatGPT: Let’s work backwards from Saturday:

ChatGPT: If two days after today is Saturday, then today is Thursday.
ChatGPT: So, the day before (two days after today) is Wednesday.
ChatGPT: And, the day before Wednesday is Tuesday.
ChatGPT: Therefore, if the day before two days after the day before tomorrow is Saturday, then today is Tuesday.

I’d like to spend some time marveling at an answer that starts out wrong, and then gets wronger.

The correct answer is that today is Friday. ChatGPT says that today is Thursday, then in the course of explaining why, changes the answer to Tuesday, without even acknowledging the change. Wow!
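You can check the answer by just unwinding the offsets from “today.” A sketch:

```python
# "the day before tomorrow" = today; "two days after" that = today + 2;
# "the day before" that = today + 1. So today + 1 must be Saturday.
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
net_offset = 0 + 2 - 1  # today, plus two days, minus one day => today + 1
today = DAYS[(DAYS.index("Saturday") - net_offset) % 7]
print(today)  # Friday
```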

Okay, I’m going to try one more. It’s a classic, so surely it will get this one right, right? RIGHT?

Q. This “burning rope” problem is a classic logic puzzle. You have two ropes that each take an hour to burn, but burn at inconsistent rates. How can you measure 45 minutes? (You can light one or both ropes at one or both ends at the same time.)

ChatGPT: Here’s how you can measure 45 minutes using the two ropes:

ChatGPT: 1. Light one end of the first rope and both ends of the second rope at the same time.

ChatGPT: 2. The first rope will burn completely in 30 minutes (since it takes an hour to burn and you lit one end).

ChatGPT: 3. The second rope will burn completely in 15 minutes (since it takes an hour to burn and you lit both ends).

ChatGPT: 4. At the moment the first rope burns out (30 minutes after you started), light the other end of the second rope.

ChatGPT: 5. The remaining half of the second rope will burn in 15 minutes (since it now only has half the length to burn).

ChatGPT: Therefore, the second rope will have burned for a total of 30 + 15 = 45 minutes when the remaining half burns out.

I genuinely thought it would do better than this by parroting syntax alone. I didn’t even change any words in the last two questions, so it’s possible or even probable that the source document was in its training data. Yet still spicy autocomplete gave answers that were literally nonsense.

Consider ChatGPT’s second bullet point: “The first rope will burn completely in 30 minutes (since it takes an hour to burn and you lit one end).” What? No. Just, no.

It’s clear that it was fed the classic puzzle in training data, as it uses words that suggest exposure to the question, but since it doesn’t understand the words it’s using, it’s literally putting out nonsense.
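For reference, the standard solution’s timing works out like this. This is just the arithmetic, not a simulation of uneven burn rates; the trick is that lighting a rope at both ends halves its remaining burn time no matter how unevenly it burns:

```python
# Rope A: lit at both ends at t=0, so it burns out at t = 60/2 = 30 minutes.
# Rope B: lit at one end at t=0, so at t=30 it has 30 minutes of burning left.
# Lighting B's other end at t=30 halves that remainder: 30/2 = 15 minutes.
t_rope_a_done = 60 / 2
remaining_b = 60 - t_rope_a_done
t_total = t_rope_a_done + remaining_b / 2
print(t_total)  # 45.0
```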

ChatGPT (March 23) is 0 for 5, or maybe 0.5 for 5 on the 50/50 question.

Now people are free to chime in and tell me that the March 23 model is just v3.5turbo, while v4 (currently available only to paying customers) gets all of these correct. Which I could easily believe! I expected syntactical clues and common training data to result in better than 0 for 5, certainly!


More fun testing with the March 23 version of OpenAI’s ChatGPT. I tried a few logic puzzles I consider simple. In case the exact page I pulled these from were used in training, I changed “duck” to “fridge” in this one:

Q. There are two fridges in front of a fridge, two fridges behind a fridge and a fridge in the middle. How many fridges are there?

ChatGPT: If we assume that “a fridge” refers to the fridge in the middle, then there are a total of 5 fridges.

ChatGPT: There are two fridges in front of the middle fridge and two fridges behind the middle fridge, making a total of four fridges. Add in the fridge in the middle, and the total is 5 fridges.

This is, of course, incorrect. The fridge referred to as “a fridge” changes with each reference, which is why it’s “a fridge” each time, rather than “the fridge.” There are three.

Let’s try a different one:

Q. Peter is looking at Susan. Susan is looking at Freddy. Peter is married, Freddy is not, and we don’t know if Susan is married. Is a married person looking at an unmarried person?

ChatGPT: Yes, a married person is looking at an unmarried person.

ChatGPT: Peter is married and looking at Susan who we don’t know if she’s married or not. Susan is looking at Freddy, who is not married. Therefore, Peter (a married person) is looking at Freddy (an unmarried person).

I’m not sure whether to give half credit here or not. It had a 50% chance of giving the correct overall answer of “yes,” but the reasoning behind the answer is nonsense. The correct answer is yes because if Susan is unmarried, Peter (married) is looking at Susan, but if Susan is married, Susan is looking at Freddy (unmarried). The answer from ChatGPT, however, appears to apply a transitive principle and claims that Peter is looking at Freddy, which is nonsense.
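The case analysis here is small enough to enumerate outright. A sketch:

```python
# Gazes: Peter -> Susan, Susan -> Freddy. Peter is married, Freddy is not.
# Check both possibilities for Susan: either way, some married person
# is looking at some unmarried person.
looks = [("Peter", "Susan"), ("Susan", "Freddy")]
for susan_married in (True, False):
    married = {"Peter": True, "Susan": susan_married, "Freddy": False}
    result = any(married[a] and not married[b] for a, b in looks)
    print(f"Susan married: {susan_married} -> married looking at unmarried: {result}")
```

Both branches print True, which is why the answer is “yes” regardless of Susan’s status.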

For the record, these were back-to-back questions in a fresh session, no cherry-picking. Try it yourself at chat.openai.com

I’ve been posting comments frequently on Hacker News recently, mostly on threads related to ChatGPT.

I find myself incredibly impressed with ChatGPT, even the 3.5turbo version (aka Mar 14) currently available at chat.openai.com. It is an incredibly useful tool that surprises me regularly with amazing responses. I completely understand why so many people anthropomorphize it, and ascribe reasoning skills to it!

It also completely fails on a near-daily basis in my testing, and in odd ways, reminding me regularly that it’s mathematically-driven spicy autocomplete.

If you’re a skeptic, set your skepticism to the side for long enough to try it out, and see if you aren’t impressed.

If you’re a true believer, step back a bit. I think you’re falling into a very human trap, one which ChatGPT probably won’t, ironically.

If you’re fearful, don’t be! These are impressive tools, but they still need to be used by someone, and you can be that someone. If people end up losing their jobs because of ChatGPT, it will be because oligarchs decided to cut costs, not because spicy autocomplete took your job.

I bought a 2023 Chevy Bolt EUV, and it’s fantastic. One of my favorite things about it is CarPlay, which I use daily. Today I learned that future Chevy EVs will not include CarPlay, making this the last Chevy EV I will buy. Just an amazingly stupid move by GM!

Apparently they want to collect more and more data from people who buy their cars, which in an amazing coincidence is something I very much DO NOT WANT.

Fortunately, I now have two EVs, so I won’t be in the market for quite some time. Maybe they’ll have reversed themselves again by then.

theverge.com/2023/3/31/2366481

Qoto Mastodon
