I realized today that neither of my cars has the ability to play CDs.

The EV era and the CD era apparently do not overlap.

Sure, “generative AI” image models are all the rage, but don’t sleep on slightly-older-fashioned FaceApp de-aging, aging, and re-gendering.

This is what I looked like, will look like, and would have looked like, apparently.

Every night, I pick up a medium soda from a local chain convenience store*. I do this at night because, well, that’s when I have time to leave the house. But I don’t want to drink soda at night, caffeinated as it is. So each night I pick up the next day’s soda.

Since I’m not planning to drink it until the next day, I don’t bother inserting a straw, which is how I’ve come to notice something odd.

The manufacturing tolerance on plastic lids is pretty tight. So tight, in fact, that if I just slap a lid down on a cup full of soda and carry it to my car, the jostling and release of bubbles often pushes up on the lid so much that soda leaks out of the side of the cup.

But wait, what about the hole in the lid? Shouldn’t the gas escape through the hole in the lid, rather than pushing up and creating a dome? It turns out that when I don’t insert a straw, there is no hole in the lid. There are two cuts in the lid, making an X or + into which a straw may be inserted, but if I don’t insert a straw, those cuts are basically sealed tight, tighter than the edges of the cup, for example.

I’ve started to use a fingernail to bend up one of the corners of the straw cut-out, just to let a very small amount of gas escape, so I still have bubbly soda the next day.

Some other time I’ll talk about why I have had as many as four (4) medium sodas in the fridge overnight.

* I said “pick up,” not “buy.” Despite owning only electric cars, I visit a gas station every day. The incentive program at RaceTrac is incredibly effective. Over time I built up enough points that I get a free medium soda or slushee (or a small coffee) every single day. Of course, I go every day, because why not? And since I’m there every day, sometimes I also buy roller grill items (America’s equivalent of Asia’s street food), and sometimes candy. So RaceTrac is giving away a lot of soda–which costs them very little–and managing to keep me as a customer, despite the fact that I will never again need to buy their main product. Pretty impressive!

On mastodon, I have felt strangely free to post when I think I have something to say, and remain a silent observer when I don’t. This might come as a surprise to some, who wonder why I think I had to say that, or this other thing, but that’s life.

For whatever record anybody is keeping, I support people. I believe in people. LGBTQIA+ people, straight people, people of different ethnic backgrounds, people who believe in different things or nothing, people. I support people living their own lives, making their own choices, loving or not loving whom they wish.

I support anybody willing to support the same people I do, while allowing people to opt out. Some people opt out by pursuing trans-exclusionary views, or anti-immigrant views, or views that involve the superiority of one ethnic group and inferiority of others. Believing any of that nonsense is opting out of the amazing wonder of a pluralistic modern society. It’s siding with the bad guys, and nobody should want to be like a nazi, or a confederate, or any other historical losers who also thought they should treat others badly.

Be a winner! Don’t just tolerate people, support people! Love people!

This is a pretty amazing video that demonstrates the state of the art of the leading LLM.


Wyatt Cheng is a Game Director at Blizzard Entertainment, so his results are based on what is visible in the video, as well as years of experience. Still, he was able to produce a working game without writing any code himself, relying on the Unity toolchain, ChatGPT-generated code, and his own experience and understanding.

As he says in the video, we’re still not quite at the point where someone with no ability or experience can create a game using ChatGPT, but we’re very close. More to the point, I think this opens new doors for someone who “thinks like a programmer,” but maybe lacks C# experience and has therefore felt intimidated by Unity.

Perhaps it’s unsurprising that this seems to support my priors, but I continue to see these amazing LLMs more as productivity enhancers than as job destroyers. Not to say some jobs won’t be destroyed! Enhancing the productivity of developers means that any large enough team now needs fewer developers. It’s also more clear than ever to me that those who can communicate clearly and completely have an advantage in dealing with LLMs.

I think it’s reasonable to be concerned about the developer pipeline when junior-level work can be accomplished almost as easily as it can be described. How will junior developers ever become senior developers if they can’t get jobs doing junior development and building experience? I’m interested in how we will solve this challenge as an industry.

In the meantime, I’m enjoying having ChatGPT write shell scripts and simple Go functions for me.

I don’t think there is anything profound I can say about the ongoing, escalating, heart-breaking nightmare in state legislatures around the United States. In Florida and Tennessee, it seems to be an all-out war against American children, joining and escalating the ongoing racism still present in this country.

I thought we were in a low-trust society before, but it’s clear our trust in institutions can–and should–be even lower.

I stand with people trying to live their lives without fear. I stand with trans people, gay kids, women, Black people, and poor people. I stand against those who care more about their comfort or hobbies than human lives. I can’t believe this isn’t the position of every thinking human, but apparently we live in a heartless world.

Or at least a heartless country.

This is a really great summary of almost everything I’ve typed on the subject of LLMs in the last week or three, but the writer was much more concise: confusedbit.dev/posts/how_does

Today is a day for posing puzzles to ChatGPT, I guess, but it’s not my first time doing so. I’d previously posed the classic river-crossing puzzle, changing the names of the traditional animals and vegetable to avoid triggering known sequences of words. Previously I had been using the March 14 public release of ChatGPT, but just now I regenerated with the March 23 public release, which might or might not be using the same underlying model.


Q. Suppose I have a blurple, a mantiple and a hoje, and I need to get them across a river. I have a boat that can only carry myself and a single other item. I am not allowed to leave the blurple and hoje alone together, and I am not allowed to leave the hoje and mantiple alone together. How can I safely get all three across?

ChatGPT: This is a classic river crossing puzzle. Here’s one possible solution:

ChatGPT: 1. Take the blurple across the river, leaving the mantiple and hoje behind.

ChatGPT: 2. Leave the blurple on the other side of the river and return to the original side.

ChatGPT: 3. Take the hoje across the river and leave it there with the blurple.

ChatGPT: 4. Take the mantiple across the river and leave it with the hoje.

ChatGPT: 5. Return to the original side of the river and retrieve the blurple.

ChatGPT: 6. Finally, take the blurple across the river to join the mantiple and hoje on the other side.

ChatGPT: This solution ensures that no two items that cannot be left alone together are ever left alone together.

While the answer claims to recognize and understand the problem, it gives a bad answer. At step 1, the hoje eats the mantiple, and at step 3, the hoje eats the blurple. Then at step 4, the hoje eats the mantiple a second time.

But let’s try again! Today’s answer is:

ChatGPT: This is a classic puzzle that requires careful planning to solve. Here’s one possible solution:

ChatGPT: 1. Take the blurple and the hoje across the river. Leave the blurple on the other side and come back alone.

ChatGPT: 2. Take the mantiple across the river. Leave it on the other side with the blurple, and bring the hoje back with you.

ChatGPT: 3. Leave the hoje on the original side and take the blurple back across the river.

ChatGPT: 4. Finally, take the hoje across the river to reunite it with the mantiple and the blurple.

ChatGPT: By following these steps, you ensure that the blurple and hoje are never left alone together, and neither are the hoje and mantiple. All three items are safely transported across the river.

Loud, exasperated sigh!

Compare and contrast the question, which includes “I have a boat that can only carry myself and a single other item,” with this answer, which starts “Take the blurple and the hoje across the river.” That breaks the rules in a new way, and goes on to still let the hoje eat the blurple in step 1. Meanwhile, the blurple is on both sides of the river at once after step 2.
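For reference, the puzzle is small enough to brute-force. Here’s a minimal sketch (the state encoding and names are my own, not anything ChatGPT produced) that finds a valid crossing sequence by breadth-first search over which items remain on the starting bank:

```python
from collections import deque

ITEMS = frozenset({"blurple", "mantiple", "hoje"})
# The question's constraints: these pairs may never be left alone together.
FORBIDDEN = [{"blurple", "hoje"}, {"hoje", "mantiple"}]

def safe(bank):
    """A bank without me on it is safe if it holds no forbidden pair."""
    return not any(pair <= bank for pair in FORBIDDEN)

def solve():
    """BFS over (items on starting bank, my side); side 0 = start, 1 = far."""
    start = (ITEMS, 0)
    goal = (frozenset(), 1)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path  # one cargo entry per crossing; None = crossing alone
        bank0, side = state
        here = bank0 if side == 0 else ITEMS - bank0
        for cargo in [None] + sorted(here):
            carried = frozenset() if cargo is None else frozenset({cargo})
            if not safe(here - carried):
                continue  # this trip would leave a forbidden pair behind
            new_bank0 = bank0 - carried if side == 0 else bank0 | carried
            new_state = (new_bank0, 1 - side)
            if new_state not in seen:
                seen.add(new_state)
                queue.append((new_state, path + [cargo]))

print(solve())
```

The shortest solution takes seven crossings, and it has to start and end by ferrying the hoje, since the hoje can’t be left with either companion.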

Syntax and probabilistic word association are far more powerful than I expected them to be, but these logic puzzles certainly don’t give any indication of that, at least not with this model.

Wow, it gets worse! I asked a third question (adding one to each of the colors, plus three to the total in the source puzzle):

Q. A man has 56 socks in his drawer: 22 identical blue, 16 identical black and 18 identical red. The lights are out and he is completely in the dark. How many socks must he take out to make 100 percent certain he has at least one pair of black socks?

ChatGPT: To make 100% certain that he has at least one pair of black socks, the man needs to take out only 3 socks.

ChatGPT: Here’s why:

ChatGPT: The worst-case scenario for the man is that he picks out all of the blue socks and all of the red socks before he gets to the black socks. In that case, he would have picked out 22 + 18 = 40 socks without getting a pair of black socks.

ChatGPT: However, as soon as he picks out the 41st sock, it will be a black sock. So, he only needs to take out 3 socks (the first two to get a different color than black, and the third to get a black sock) to make sure that he has at least one pair of black socks.

What? That’s… the correct answer is 42, not 3!
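The 42 is just the worst case spelled out; a two-line sketch with the numbers from the question:

```python
blue, black, red = 22, 16, 18

# Worst case: he draws every blue and every red sock before touching a
# black one, then needs two more draws to guarantee a black pair.
worst_case = blue + red + 2
print(worst_case)  # 42
```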

Is it going to get all of these wrong? That is not what I expected!

Q. The day before two days after the day before tomorrow is Saturday. What day is it today?

ChatGPT: If the day before two days after the day before tomorrow is Saturday, then today is Thursday.

ChatGPT: Here’s why:

ChatGPT: “Two days after the day before tomorrow” is the same as “two days after today”. So, if the day before two days after the day before tomorrow is Saturday, then we can write it as:

ChatGPT: The day before (two days after today) is Saturday.

ChatGPT: Let’s work backwards from Saturday:

ChatGPT: If two days after today is Saturday, then today is Thursday.
ChatGPT: So, the day before (two days after today) is Wednesday.
ChatGPT: And, the day before Wednesday is Tuesday.
ChatGPT: Therefore, if the day before two days after the day before tomorrow is Saturday, then today is Tuesday.

I’d like to spend some time marveling at an answer that starts out wrong, and then gets wronger.

The correct answer is that today is Friday. ChatGPT says that today is Thursday, then in the course of explaining why, changes the answer to Tuesday, without even acknowledging the change. Wow!
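The phrase unwinds mechanically: “the day before tomorrow” is today, “two days after” that is today + 2, and “the day before” that is today + 1, which must be Saturday. A small sketch (helper names are mine) that checks every candidate:

```python
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def shift(day, n):
    """Return the weekday n days after `day` (negative n goes backward)."""
    return DAYS[(DAYS.index(day) + n) % 7]

answer = None
for today in DAYS:
    day_before_tomorrow = shift(shift(today, 1), -1)     # = today
    two_days_after_that = shift(day_before_tomorrow, 2)  # today + 2
    the_day_before_that = shift(two_days_after_that, -1) # today + 1
    if the_day_before_that == "Saturday":
        answer = today

print(answer)  # Friday
```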

Okay, I’m going to try one more. It’s a classic, so surely it will get this one right, right? RIGHT?

Q. This “burning rope” problem is a classic logic puzzle. You have two ropes that each take an hour to burn, but burn at inconsistent rates. How can you measure 45 minutes? (You can light one or both ropes at one or both ends at the same time.)

ChatGPT: Here’s how you can measure 45 minutes using the two ropes:

ChatGPT: 1. Light one end of the first rope and both ends of the second rope at the same time.

ChatGPT: 2. The first rope will burn completely in 30 minutes (since it takes an hour to burn and you lit one end).

ChatGPT: 3. The second rope will burn completely in 15 minutes (since it takes an hour to burn and you lit both ends).

ChatGPT: 4. At the moment the first rope burns out (30 minutes after you started), light the other end of the second rope.

ChatGPT: 5. The remaining half of the second rope will burn in 15 minutes (since it now only has half the length to burn).

ChatGPT: Therefore, the second rope will have burned for a total of 30 + 15 = 45 minutes when the remaining half burns out.

I genuinely thought it would do better than this by parroting syntax alone. I didn’t even change any words in the last two questions, so it’s possible or even probable that the source document was in its training data. Yet still spicy autocomplete gave answers that were literally nonsense.

Consider ChatGPT’s second bullet point: “The first rope will burn completely in 30 minutes (since it takes an hour to burn and you lit one end).” What? No. Just, no.

It’s clear that it was fed the classic puzzle in training data, as it uses words that suggest exposure to the question, but since it doesn’t understand the words it’s using, it’s literally putting out nonsense.
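For contrast, the classic solution is: light rope A at both ends and rope B at one end simultaneously; when A burns out, light B’s other end. A sketch of the arithmetic, assuming the 60-minute ropes from the question:

```python
rope_minutes = 60  # each rope takes an hour end to end

# Lighting a rope at BOTH ends halves its remaining burn time, no matter
# how unevenly it burns along its length.
t_rope_a_done = rope_minutes / 2  # rope A, lit both ends at t=0 -> 30 min

# At that moment rope B (lit at one end at t=0) has 30 minutes of burning
# left in it; lighting its second end halves that remainder.
t_rope_b_done = t_rope_a_done + (rope_minutes - t_rope_a_done) / 2

print(t_rope_b_done)  # 45.0
```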

ChatGPT (March 23) is 0 for 5, or maybe 0.5 for 5 on the 50/50 question.

Now people are free to chime in and tell me that the March 23 model is just v3.5turbo, while v4 (currently available only to paying customers) gets all of these correct. Which I could easily believe! I expected syntactical clues and common training data to result in better than 0 for 5, certainly!


More fun testing with the March 23 version of OpenAI’s ChatGPT. I tried a few logic puzzles I consider simple. In case the exact page I pulled these from was used in training, I changed “duck” to “fridge” in this one:

Q. There are two fridges in front of a fridge, two fridges behind a fridge and a fridge in the middle. How many fridges are there?

ChatGPT: If we assume that “a fridge” refers to the fridge in the middle, then there are a total of 5 fridges.

ChatGPT: There are two fridges in front of the middle fridge and two fridges behind the middle fridge, making a total of four fridges. Add in the fridge in the middle, and the total is 5 fridges.

This is, of course, incorrect. The fridge referred to as “a fridge” changes with each reference, which is why it’s “a fridge” each time, rather than “the fridge.” There are three.

Let’s try a different one:

Q. Peter is looking at Susan. Susan is looking at Freddy. Peter is married, Freddy is not, and we don’t know if Susan is married. Is a married person looking at an unmarried person?

ChatGPT: Yes, a married person is looking at an unmarried person.

ChatGPT: Peter is married and looking at Susan who we don’t know if she’s married or not. Susan is looking at Freddy, who is not married. Therefore, Peter (a married person) is looking at Freddy (an unmarried person).

I’m not sure whether to give half credit here, or not. It had a 50% chance of giving the correct overall answer of “yes,” but the reasoning in the answer is nonsense. The correct answer is yes because if Susan is unmarried, Peter is looking at Susan, but if Susan is married, Susan is looking at Freddy. But the answer from ChatGPT appears to apply a commutative principle and claims that Peter is looking at Freddy, which is nonsense.
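The case analysis is small enough to enumerate; here’s a sketch (the bookkeeping names are mine) that checks both possibilities for Susan:

```python
# Who is looking at whom, per the question.
looking_at = [("Peter", "Susan"), ("Susan", "Freddy")]

# Susan's marital status is unknown, so check both cases.
results = {}
for susan_married in (True, False):
    married = {"Peter": True, "Susan": susan_married, "Freddy": False}
    results[susan_married] = any(
        married[a] and not married[b] for a, b in looking_at
    )

print(results)  # a married person looks at an unmarried one in both cases
```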

For the record, these were back-to-back questions in a fresh session, no cherry-picking. Try it yourself at chat.openai.com

I’ve been posting comments frequently on Hacker News recently, mostly on threads related to ChatGPT.

I find myself incredibly impressed with ChatGPT, even the 3.5turbo version (aka Mar 14) currently available at chat.openai.com. It is an incredibly useful tool that surprises me regularly with amazing responses. I completely understand why so many people anthropomorphize it, and ascribe reasoning skills to it!

It also completely fails on a near-daily basis in my testing, and in odd ways, reminding me regularly that it’s mathematically-driven spicy autocomplete.

If you’re a skeptic, set your skepticism to the side for long enough to try it out, and see if you aren’t impressed.

If you’re a true believer, step back a bit. I think you’re falling into a very human trap, one which ChatGPT probably won’t, ironically.

If you’re fearful, don’t be! These are impressive tools, but they still need to be used by someone, and you can be that someone. If people end up losing their jobs because of ChatGPT, it will be because oligarchs decided to cut costs, not because spicy autocomplete took your job.

I bought a 2023 Chevy Bolt EUV, and it’s fantastic. One of my favorite things about it is CarPlay, which I use daily. Today I learned that future Chevy EVs will not include CarPlay, making this the last Chevy EV I will buy. Just an amazingly stupid move by GM!

Apparently they want to collect more and more data from people who buy their cars, which in an amazing coincidence is something I very much DO NOT WANT.

Fortunately, I now have two EVs, so I won’t be in the market for quite some time. Maybe they’ll have reversed themselves again by then.


If you’re trying to find music that is engaging without being disruptive, music you can play during creative work that won’t take you out of your flow state, I highly recommend music in a language you don’t understand.


I stumbled on this album thanks to @cambraca earlier this week, and it turns out Korean shoegaze is exactly the genre I needed, and Parannoul is exactly the band to give it to me. I have no idea what they’re saying, because I refuse to click the links on the Bandcamp site to show me the English translation of the lyrics. The tone and energy of the music is all I want to know right now. I mean, the first song is called 아름다운 세상, which I’m told means “Beautiful World.” That’s fine, that’s all the info I need.

It’s been my background music during work, my driving music, and I’ve sent this link to a few people already. Now I pass it along to all of you. Happy Friday! Spend your weekend enjoying this excellent Korean shoegaze album from 2021, and maybe next week I’ll check out one of the two albums they’ve put out since–or maybe not.

I can’t stop thinking about something @pluralistic said to @adamconover in this one-minute video clip.


To paraphrase, optimism and pessimism are both rooted in fatalism, because they presume that nothing we do matters.

Instead we choose Hope. We do things to make life better at every opportunity, to move toward the world we Hope to see, and what we do matters whether it lines up with a big-picture plan or not. We move forward in Hope, and then we see new ways to move forward in Hope, and then we do it again and again.

What else can we do?

I read an essay last week so bizarre I can’t even classify it as “wrong.” If I didn’t recognize the author, I would have thought it parody, but it had none of the hallmarks of good parody other than being consistently ridiculous. On a scale of right to wrong, it runs perpendicular to the scale. It is orthogonal to the concept of truth.

Most of all, it seems to have aged incredibly poorly since it was published nine days ago.


Anyone who has read about the Luddites knows that they were right, so using them as an example of silly fears sets the wrong tone from the start. Claiming that outsourcing and automation were also examples of things people feared needlessly begins to build the foundation for something so bizarre and counter-factual it must be the result of a dare.

But all of that fades away to background noise when the author charts a collection of things that have gone up in price against things that have gone down in price and claims, absent evidence or reason, that the difference is government control. Average hourly wages rising since 2000? Government control! (Reality: the last federal minimum wage increase was in 2009) Housing prices? Government control! (Reality: a decrease in regulation led to the 2008 housing crisis) Food and beverages? Government control! (Reality: Huh? Is he mad about… food safety laws?)

Oh, how I wish for such government control as he imagines!

Meanwhile the price of ephemeral things like cellphone service and software are down, along with the sorts of things that are built via outsourcing and automation, like TVs and toys. And while toys and electronics are more regulated in the US than, say, food and beverages, or childcare and nursery school, somehow the color-coding says otherwise. Beware the government control boogeyman!

Anyway, I don’t know nor care whether AI is going to cause unemployment. I mean, sure, it clearly is going to replace some jobs while creating others, generally shifting still more options out of reach of a certain class of workers, so the comparison to outsourcing and automation is apt, but really, that’s not the point of the essay.

The point of the essay is to argue that government control is The Problem™, literally days before Silicon Valley Bank went under and had to be rescued by the government using government control of the banking industry. One of SVB’s biggest clients? The author of the essay. I guess he’s pretty glad of government control today!

It was bizarrely, weirdly Wrong with a capital W, T, and F the day he published it, and it has gotten even more Fd in the days since.

You would think the author would have better things to do than write such nonsense, like try to get some kind of return on the $400 million he invested into the recent purchase of Twitter with seemingly zero due diligence or common sense. But hey, he’s rich, and there are two different economies, apparently.

I don’t have well-polished arguments for anything related to the fediverse, but I have some impulses and instincts. They might be wrong, of course. But I see fears and concerns popping up that seem to be born of some common fallacies, so I suspect those fears and concerns are not well-founded.

One example I keep seeing is resistance to large mastodon instances: that they’re antithetical to federation, that using them just re-invents centralized social media. We’d all be better off, some claim, with single-user instances! In this case, I get the concern about concentration of control, truly. Google seems to have damaged email as they’ve grown to more than a quarter of email use. That said, the nature of federation is complex, and it is not clear that very large instances are going to cause more problems than the fediverse already has. Which is not saying they won’t cause problems, just that avoiding them hasn’t helped avoid problems.

Already some of the largest servers around are blocked by a large number of smaller servers. For example, mastodon.social is the largest server currently, and it’s widely blocked. There are even a couple of well-known mastodon servers out there that make no attempt to federate with others whatsoever, for which most people are grateful. As I’ve posted before, some server admins are very, very quick to block and very, very unwilling to ever consider the possibility that they’ve misjudged a server. That’s their right! It’s frustrating to people aware of the issue, but has no known effect on people unaware of the issue, so that’s the system working as designed.

Let’s consider two hypothetical futures. One is a future in which the fediverse grows to more than 100 million active users, but no server has more than 200,000 active users, a number I picked as slightly higher than mastodon.social’s current active users. In this potential future, most people are on very small servers, even individual servers. There are at least a half-million servers, maybe millions. Each of them federates with… well, only with servers used by people someone follows, right? Which would make timelines seem desolate, since that’s the weakness of smaller servers. In fact, very small servers often rely on relays to deliver the wider fediverse, instead of having to rely solely on the follow list of a few users, or one. Of course, that puts quite a bit of load on the relay servers, making them somewhat expensive to operate, so I suspect there would be only a few very large relay servers, operated by larger organizations, and… wait a minute! Doesn’t this just shift the problem of control and concentration to the operators of relay servers? I think it does. What they choose to relay or not becomes near-synonymous with “what is mastodon,” and the fact that some smaller servers use smaller relays, or eschew relays altogether, won’t matter. Once the majority of servers use relays, relays are the norm, and once one of those reliable and well-moderated relays grows to, say, 25% of servers, that’s the same position Google is in with email now. Call that scenario 1A.

The alternative scenario 1B might be that even with most of us on smaller servers, we don’t use relays. Instead, every server relies solely on federation with other servers. Of course, since popular users are spread around many different servers, that means each smaller server federates with a long list of popular servers, which eventually results in articles written in breathless tones about how mastodon has finally eclipsed pirated and adult content in terms of bandwidth, because now there are thousands of copies of all popular posts and images and videos, tens of thousands! The scaling issue and administration work involved would be incredibly limiting, with an obvious solution at hand: relay servers. Now we’re back to scenario 1A again, and back to similar issues to what we face today, with some relay servers attempting to provide every possible post from every possible server, while others are essentially opt-in and very focused, with a variety of relay servers in between the two extremes.

There’s another future, though! The future that some are afraid of is that companies like Medium and Mozilla and worse come along and build up gigantic servers. That more than 100 million active users means not 5 million servers, but 50,000 at most, with more than 10 million users each at a few of the biggest servers. To support more than 10 million users takes a lot of resources, so only large companies can afford that, and now, in scenario 2, we have most users using one of Google, Yahoo, Microsoft, Medium, and Mozilla. Together they support more than half of the 100 million active fediverse users, giving them outsize control over the fediverse. What they say goes. It’s a disaster, right?

I think to answer that question, it might serve to explain what’s wrong with Google’s outsized impact on email. It has definitely made some things easier! For example, a gmail address is so common that “@gmail.com” is an option on some kiosks. I don’t have to repeat or spell out anything after the at sign with a gmail address. People know what I mean. Google keeps most spam at bay, although the junk mail that sneaks through ebbs and flows, despite Google’s extensive efforts. They deliver a decent service that most people don’t have to think about, and they do it at no out-of-pocket cost to most users. Of course, there are downsides. If Google decides your mail server doesn’t take spam seriously in exactly the way they do, no more and no less, they may decide not to deliver your email to any gmail user. You’re a peer, but not really a peer, since Google is so much larger. If Google decides they don’t like you as a user, you’re hosed. You lose access to everything, not just your email, and there is basically no recourse or appeal. You do have alternatives, of course. You can start over with a yahoo.com or hotmail.com address, for example. Or you can upgrade to fastmail.com or take a step sideways with hey.com. But it’s not pretty, and the more services they bundle together, and the more users they have, the uglier it gets.

So if Google’s hypothetical future mastodon server decides your server doesn’t do something in exactly the way they want, no more and no less, they may silence or block your server, so your posts don’t reach any “@gmastodon.com” user. If Google decides they don’t like you as a user, on either this or any other service they offer, you are very thoroughly hosed. You lose access to everything, email and mastodon included, without recourse or appeal. It’s really ugly.


But you do still have alternatives. If your favorite mastodon server focused on artists blocks the Google mastodon server, and they definitely would, you can create an account directly with that favorite mastodon server focused on artists, or any other non-Google mastodon server they don’t block. You can switch every time they block a server you’re on, and they do love to block servers.

This is how the fediverse is unlike single companies such as twitter, or facebook, or even spoutible. Even large servers aren’t re-inventing centralized social media, not as long as federation still exists. If the owner of twitter decides you don’t belong on twitter, that’s it. The same is true of facebook or spoutible. Some might be more or less likely to kick you off, but once you’re kicked off, that’s it. In contrast, there is no owner of the fediverse, nor of mastodon. If the primary developer of mastodon decides you don’t belong on the server he controls, then you’re kicked off of mastodon.social. But you have currently tens of thousands of other options, and you can spin up a new one just for yourself at any of a number of hosting companies for around $6 or €5 per month. I could even install one on the NAS in my house, and might eventually. That will still be true even if the server you’re kicked off of serves more than 20% of the entire fediverse. That still leaves another nearly 80% for you, which is 80% more than you have with a centralized option.

I fear less a future in which big companies operate big fediverse servers, and more a future in which mastodon growth is limited by scaling issues with network bandwidth (in the case of too many servers trying to federate) or human bandwidth (because server moderation is a thankless job, and currently largely unpaid), or both.

Bring on the big players. I will probably stick with smaller servers that federate with the big players. The good news is that you have a choice! If you hate the very idea of the fediverse being for everyone, and want it to stop growing already, you can easily block any new servers that come online and start to grow. You can use your server-wide blocklist to maintain a tiny little bubble not much larger than those of gab or the server associated with the insurrectionist former POTUS. You can make your view of the fediverse as small as you want it to be, but you can’t stop others from making it ever-larger. It’s a great wide world out there, and I’m looking forward to seeing it.

I observe without additional comment that my two most-downvoted recent comments on Hacker News are first, a suggestion that a commenter’s complaint about TikTok showing him underage girls is related to a common desire to idealize youth but avoid illegal or troublesome activity, and second, a statement that kindness never goes out of style.

For anyone keeping track of the puzzle-solving adventures at my house: I did eventually finish all 2000 pieces.

I think Google’s current behavior is finally enough for me to commit fully to the @fastmail account I’ve been using for years now. I’ve been using @DuckDuckGo for years as well, and everything sent to my gmail address ends up in my Fastmail account, but email still goes through their servers first, because that’s the address I give people to reach me.

The thing that’s hard to give up is Google Docs. Having bought Writely and XL2Web, then extended both products, Google seems to be far and away above all competition in the space. But my loathing for Google keeps growing; maybe I’ll accept less to avoid them completely.
