Joe boosted

Disney sending the lawyers down to the mouse vault to use their tiny hammer to break the "find out" glass and go for the governor lololol, extremely here for this

@Riedl I was a bit confused about the difference between "overall latency" and "end-to-end" latency. Since they didn't give the benchmark numbers for the "end-to-end" latency, it's hard to tell how big an improvement this really is. It looks like the upper bound, though, is about 2x.

@ct_bergstrom Another "Sparks of AGI" problem is the claim that GPT-4 can reason about emotions in complex situations. The example they give isn't particularly complex, so I came up with another one. Well, I don't see a future career in therapy for this model.

Joe boosted

After hearing Sebastian Bubeck talk about the Sparks of AGI paper today, I decided to give GPT-4 another chance.

If it can really reason, it should be able to solve very simple logic puzzles. So I made one up. Sebastian stressed the importance of asking the question right, so I stressed that this is a logic puzzle and didn't add anything confusing about knights and knaves.

Still, it gets the solution wrong.

Joe boosted

Here's a neat evolution trick.

Toxic animals can use bright colors to warn off predators, but those same colors make the animals more conspicuous. So how did those colors evolve without causing higher predation rates amongst the animals with the warning colors?

One possible answer: several steps are involved in evolving to full warning colors, and in the initial stages the warnings are hidden until the animal displays them to potential predators.

earth.com/news/how-did-warning

@heidilifeldman Wondering if this was part of the Dominion settlement in lieu of a public apology by Fox.

Epic Systems, the electronic medical records company famous for prediction algorithms that don't work, has apparently decided to continue that trend by partnering with Microsoft to use GPT-4. arstechnica.com/information-te

Joe boosted

Absolutely revelatory piece from Yoav Goldberg casting light on an overlooked puzzle about last year: why did we need *reinforcement* learning (RLHF) to unlock the potential of language models? Why wasn’t supervised learning enough? #LLM #AI gist.github.com/yoavg/6bff0fec

@adamgurri Right. But my point is advocates oversimplify by leaving out the second-order costs of changing the density of existing cities - e.g. gentrification, less affordable housing, higher taxes to support infrastructure improvements, construction disruption, increased costs for existing residents, etc.

@adamgurri What really matters, of course, is urban density, unless you think it's a great idea to build on range, crop, or forest land to increase population density for environmental reasons. That's why I picked a city like Seattle. Try any other west coast city and you'll get the same results. Here's LA

@adamgurri High population density advocates never factor in the impact of higher real estate and infrastructure costs in their utopian schemes. Construction costs here in Seattle have more than doubled over the last 10 years.

NYTimes repeating Elon's company line that a rocket explosion was a success. US media, 2023.

@Riedl My website for my ancient iOS app is also part of the training set, which definitely disproves Common Crawl's claim that "it tries to prioritize the most important and reputable sites."

@mmitchell_ai The AI hype version of the sci-fi movie where intelligent life is discovered in a distant galaxy and THEY LOOK JUST LIKE US!!!
