@jeffjarvis Shocking that a bunch of narcissistic SV bros would assume that a "conscious" AI is going to be just like, well, them.
What you need to know about the crackpot at the center of the new AI panic cycle
https://twitter.com/xriskology/status/1642155518570512384?s=61&t=Ugdi4XBKf_2ovJ1y9hKs4w
@ct_bergstrom Hadn't thought about edginess from the pitch clock. Maybe it would have been better to have the ump just decide when the pitcher was taking too long. Like the old days when every ump had a different strike zone.
@ct_bergstrom Nice. Used to be a big fan a few years ago, but lost interest as small ball died and the games became interminable. Might jump back in if the pitch clock/shift ban/larger bases etc. stuff works!
@Riedl Best part was the oxymoronic call for "immediate multilateral agreements".
Reality check: the SALT II nuclear agreement between the US and Soviet Union took seven years to negotiate.
@Riedl Just replace "AI" with "Fox News" and I think he has something.
@JamesGleick I automatically skip any NYTimes "analysis" if Peter Baker is in the byline.
@pbump The fact that more than twice as many Republicans are concerned about TikTok as compared to a disease that's still killing 100k people/year in the US is pretty telling.
P.S. That bar chart would be a lot easier to understand if you stuck with one coloring scheme and just labeled the bars by political party.
@ct_bergstrom @aidybarnett LLMs are the technology version of Gresham's Law.
@ct_bergstrom So For You will be the same as the Following tab but with Twitter Blue spam. Awesome!
Connecting AIs / LLMs to the Internet feels like a discontinuity we should be particularly mindful of. The damage caused by an AI gone rogue could be large. Not because of AGI, I don't buy that. Just a program being smart enough to take serious action very fast, and dumb enough to have no idea of the consequences.
In 1975, scientists declared a moratorium on recombinant DNA research, so there could be an open conversation about safety.
https://en.m.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA
Might be time for that in AI.
@ct_bergstrom And, of course, I didn't read your follow-up because ... social media! My apologies for not checking to see if you had edited your response.
@ct_bergstrom Not that anyone asked, but I strongly disagree about the quality of this piece. It is *extremely* hyperbolic and much of the piece is based on the incorrect assumption that GPT-4 is AGI.
They anthropomorphize: "a large percentage of stories, melodies, images, laws, policies and tools are shaped by nonhuman intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind — while knowing how to form intimate relationships with human beings".
They make sensational, unsupported claims: "By 2028, the U.S. presidential race might no longer be run by humans." and "A.I. could rapidly eat the whole of human culture" and "unleashing godlike powers decoupled from responsibility, could be the very reason the West loses to China."
C'mon man. Regulation is certainly a good idea, but these kinds of scare pieces aren't going to help policy makers develop reasonable legislation. See TikTok.
@Riedl I'm old enough to remember when LeCun overhyped Galactica and blamed everyone but himself for the bad public response. Now he's off the auto-regressive model bandwagon. No NIH going on here, nope not at all.
@wc_ratcliff@ecoevo.social
Here's a recent very rough guess I made:
OpenAI's GPT-4 paper (aka press release) revealed nothing about their model, so you have to make a Fermi estimate.
They did publish info on GPT-3 training - 1287 MWh to train GPT-3 on 300B tokens. If you assume 10x for GPT-4 and that inference costs about half of training (no backprop), you get about 3e-3 kWh/1000 tokens. That's probably an upper bound.
https://arxiv.org/ftp/arxiv/papers/2104/2104.10350.pdf
https://arxiv.org/pdf/2005.14165.pdf
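The arithmetic behind that rough guess can be sketched out. One hedge: the "10x for GPT-4" step is my reading of the original estimate — I assume the 10x training energy comes with roughly 10x the training tokens, so the per-token energy stays about the same as GPT-3's.

```python
# Fermi estimate of per-token inference energy, from the published
# GPT-3 training figures (Patterson et al., 2021): 1287 MWh / 300B tokens.
training_energy_kwh = 1287e3   # 1287 MWh, in kWh
training_tokens = 300e9        # 300B tokens

# Training energy per 1000 tokens
train_kwh_per_1k = training_energy_kwh / training_tokens * 1000

# Assumption: inference costs roughly half of training per token
# (no backward pass). Assumption: GPT-4's ~10x training energy is
# spread over roughly 10x the tokens, so per-token cost is similar.
infer_kwh_per_1k = train_kwh_per_1k / 2

print(f"training:  {train_kwh_per_1k:.1e} kWh / 1000 tokens")
print(f"inference: {infer_kwh_per_1k:.1e} kWh / 1000 tokens")
```

This lands at roughly 2e-3 kWh per 1000 tokens for inference — the same order of magnitude as the ~3e-3 figure above, and still an upper bound given all the rounding in the assumptions.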
Unprofessional data wrangler and Mastodon’s official fact checker. Older and crankier than you are.