@Riedl I was a bit confused about the difference between "overall latency" and "end-to-end" latency. Since they didn't give the benchmark numbers for the "end-to-end" latency, it's hard to tell how big an improvement this really is. It looks like the upper bound, though, is about 2x.
@ct_bergstrom Another "Sparks of AGI" problem is the claim that GPT-4 can reason about emotions in complex situations. The example they give isn't particularly complex, so I came up with another one. Well, I don't see a future career in therapy for this model.
After hearing Sébastien Bubeck talk about the Sparks of AGI paper today, I decided to give GPT-4 another chance.
If it can really reason, it should be able to solve very simple logic puzzles. So I made one up. Sébastien stressed the importance of asking the question right, so I made clear that this is a logic puzzle and didn't add anything confusing about knights and knaves.
Still, it gets the solution wrong.
so those layoffs were never about saving money, then
https://www.cnbc.com/2023/04/25/google-authorizes-70-billion-buyback.html
Here's a neat evolution trick.
Toxic animals can use bright colors to warn off predators, but those same colors make them more conspicuous. So how did warning colors evolve without raising predation rates among the animals that first carried them?
One possible answer: evolving full warning coloration involves several steps, and in the initial stages the warning colors stay hidden until the animal displays them to a potential predator.
https://www.earth.com/news/how-did-warning-coloration-evolve-in-animals/
@heidilifeldman Wondering if this was part of the Dominion settlement in lieu of a public apology by Fox.
Epic Systems, the electronic medical records company famous for prediction algorithms that don't work, has apparently decided to continue that trend by partnering with Microsoft to use GPT-4. https://arstechnica.com/information-technology/2023/04/gpt-4-will-hunt-for-trends-in-medical-records-thanks-to-microsoft-and-epic
Absolutely revelatory piece from Yoav Goldberg casting light on an overlooked puzzle about last year: why did we need *reinforcement* learning (RLHF) to unlock the potential of language models? Why wasn’t supervised learning enough? #LLM #AI https://gist.github.com/yoavg/6bff0fecd65950898eba1bb321cfbd81
@adamgurri Right. But my point is that advocates oversimplify by leaving out the second-order costs of changing the density of existing cities - e.g. gentrification, less affordable housing, higher taxes to support infrastructure improvements, construction disruption, increased costs for existing residents, etc.
@adamgurri What really matters, of course, is urban density, unless you think it's a great idea to build on range, crop, or forest land to increase population density for environmental reasons. That's why I picked a city like Seattle. Try any other west coast city and you'll get the same results. Here's LA
@adamgurri High population density advocates never factor in the impact of higher real estate and infrastructure costs in their utopian schemes. Construction costs here in Seattle have more than doubled over the last 10 years.
@Riedl Successfully predicted failure.
@Riedl My website for my ancient iOS app is also part of the training set, which definitely disproves Common Crawl's claim that "it tries to prioritize the most important and reputable sites."
@mmitchell_ai The AI hype version of the sci-fi movie where intelligent life is discovered in a distant galaxy and THEY LOOK JUST LIKE US!!!