so those layoffs were never about saving money, then
https://www.cnbc.com/2023/04/25/google-authorizes-70-billion-buyback.html
Here's a neat evolution trick.
Toxic animals can use bright colors to warn off predators, but those same colors make the animals more conspicuous. So how did those colors evolve without causing higher predation rates amongst the animals with the warning colors?
One possible answer: several steps are involved in evolving to full warning colors, and in the initial stages the warnings are hidden until the animal displays them to potential predators.
https://www.earth.com/news/how-did-warning-coloration-evolve-in-animals/
Epic Systems, the electronic medical records company famous for prediction algorithms that don't work, has apparently decided to continue that trend by partnering with Microsoft to use GPT-4. https://arstechnica.com/information-technology/2023/04/gpt-4-will-hunt-for-trends-in-medical-records-thanks-to-microsoft-and-epic
Absolutely revelatory piece from Yoav Goldberg casting light on an overlooked puzzle about last year: why did we need *reinforcement* learning (RLHF) to unlock the potential of language models? Why wasn’t supervised learning enough? #LLM #AI https://gist.github.com/yoavg/6bff0fecd65950898eba1bb321cfbd81
Once again, I'm reminded of how much the billionaire space race has absolutely destroyed my love of rockets.
10 years ago, I definitely would have been paying close attention to the current giant SpaceX launch. But because I know it's going to be used to launch hundreds of unregulated, unsafe, polluting, for-profit Starlink satellites at once, I just can't look.
Instead of being excited and awestruck by a new gigantic rocket launch, it just makes me want to puke.
Another jazz legend has passed. https://www.nytimes.com/2023/04/16/obituaries/ahmad-jamal-jazz-dead.html
@ct_bergstrom Here's another alleged example of common sense reasoning that fails if you just tweak it a bit. Shot:
@philipncohen Of course my reputation is so impeccable that Bard simply melts down, HAL-9000 style, when you ask the same question about me.
@ct_bergstrom And if you just switch it up a bit (substitute cow for fox) it gives an incorrect answer (since it leaves the cow alone with the corn). There are other examples of this you can discover for yourself if you play with the examples in the appendix.
@ct_bergstrom I think the answer is clear. If you ask GPT4 how it arrived at the correct answer, it happily tells you that it's already familiar with the puzzle. 4/
@ct_bergstrom This, of course, is a very old riddle where the answer depends on understanding how to avoid predator/prey combinations. One question is: did GPT4 reason about this or did it memorize the answer because it saw it during training? 3/
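(An aside, not from the thread or the paper: the riddle itself is just a tiny constraint-search problem. Here's a minimal Python sketch that solves it from nothing but the "X eats Y when unsupervised" relation; the animal names and the relations in the swapped version are my own illustration. The point is that a genuine reasoner only needs the structure, so renaming the animals shouldn't matter.)

```python
from itertools import chain
from collections import deque

def solve(items, eats):
    """items: things the farmer must ferry across the river.
    eats: set of (eater, eaten) pairs that can't be left alone together."""
    start = (frozenset(items), frozenset(), "left")  # (left bank, right bank, farmer's side)

    def safe(bank):
        # A bank without the farmer is safe if no eater/eaten pair is on it.
        return not any((a, b) in eats for a in bank for b in bank)

    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, right, side), path = queue.popleft()
        if not left:                 # everything has been ferried across
            return path
        here, there = (left, right) if side == "left" else (right, left)
        other = "right" if side == "left" else "left"
        # The farmer crosses alone or with one item from the current bank.
        for cargo in chain([None], here):
            moved = frozenset([cargo]) if cargo else frozenset()
            new_here, new_there = here - moved, there | moved
            if not safe(new_here):   # the bank left behind must stay safe
                continue
            state = ((new_here, new_there, other) if side == "left"
                     else (new_there, new_here, other))
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [f"take {cargo or 'nothing'} to the {other} bank"]))
    return None

# Classic version: the fox eats the chicken, the chicken eats the corn.
print(solve({"fox", "chicken", "corn"}, {("fox", "chicken"), ("chicken", "corn")}))
# A swapped version in the spirit of the thread: now the cow is the one after the corn.
print(solve({"cow", "chicken", "corn"}, {("cow", "corn"), ("chicken", "corn")}))
```

Both calls print a valid crossing sequence; the search doesn't care what the animals are called, only who eats whom.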
Microsoft is really amping up the GPT AGI hype with some truly terrible papers. One recent paper ("Sparks of Artificial General Intelligence: Early experiments with GPT-4," h/t @ct_bergstrom) has examples of what they consider to be evidence of "commonsense reasoning". Let's take a look! 1/
Life in the Crimson Bubble https://www.thecrimson.com/article/2023/4/14/pinker-academic-freedom-council/
The paper does state that they did this - "With regard to data collection, we gather Gaokao and SAT questions from publicly available online sources, along with their corresponding solutions or explanations."
No mention at all of how this is likely poisoned data.