Chuck Darwin

“We’ve achieved peak data and there’ll be no more.”

OpenAI’s cofounder and former chief scientist,
#Ilya #Sutskever, made headlines earlier this year after he left to start his own AI lab called
Safe Superintelligence Inc.

He has avoided the limelight since his departure but made a rare public appearance in Vancouver on Friday at the
Conference on Neural Information Processing Systems (NeurIPS).

“Pre-training as we know it will unquestionably end,” Sutskever said onstage.

This refers to the first phase of AI model development,
when a large language model learns patterns from vast amounts of unlabeled data
— typically text from the internet, books, and other sources.
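
In code terms, that phase boils down to next-token prediction over raw text. A minimal sketch, assuming PyTorch, with a toy corpus and model standing in for the real thing (every name here is illustrative, not anything from OpenAI):

```python
# Toy next-token pre-training loop (illustrative sketch, not a real pipeline).
import torch
import torch.nn as nn

corpus = "the quick brown fox jumps over the lazy dog "  # stand-in for web-scale text
vocab = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}
ids = torch.tensor([stoi[ch] for ch in corpus])

model = nn.Sequential(                 # toy "language model"
    nn.Embedding(len(vocab), 32),
    nn.Linear(32, len(vocab)),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)

for step in range(200):
    x, y = ids[:-1], ids[1:]           # each token is trained to predict the next
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scaled up by many orders of magnitude, that loop is what consumes the data Sutskever says is running out.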

During his NeurIPS talk, Sutskever said that,
while he believes existing data can still take AI development further,
the industry is tapping out on new data to train on.

This dynamic will, he said, eventually force a shift away from the way models are trained today.

He compared the situation to fossil fuels:
just as oil is a finite resource,
the internet contains a finite amount of human-generated content.

“We’ve achieved peak data and there’ll be no more,” according to Sutskever.

“We have to deal with the data that we have. There’s only one internet.”

Next-generation models, he predicted, are going to “be agentic in a real way.”

Agents have become a real buzzword in the AI field.

While Sutskever didn’t define them during his talk, they are commonly understood to be autonomous AI systems that perform tasks, make decisions,
and interact with software on their own.
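
In practice, that definition usually cashes out as a loop in which the model chooses actions and ordinary software carries them out. A minimal sketch with hypothetical names (plan_next_action, the tools dict); nothing here is any specific vendor's API:

```python
# Hypothetical agent loop: the model decides, the software acts.
def run_agent(goal, model, tools, max_steps=10):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model inspects what has happened so far and picks the next action.
        action, argument = model.plan_next_action(history)
        if action == "finish":
            return argument                    # the agent's final answer
        observation = tools[action](argument)  # e.g. search, run code, click
        history.append(f"{action}({argument}) -> {observation}")
    return None  # step budget exhausted
```

The autonomy comes from the fact that the model, not the programmer, chooses each step.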

Along with being “agentic,” he said future systems will also be able to reason.

Unlike today’s AI, which mostly pattern-matches based on what it has seen before,
future systems will be able to work things out step by step in a way that is more comparable to thinking.

The more a system reasons, “the more unpredictable it becomes,” according to Sutskever.

He compared the unpredictability of “truly reasoning systems” to how advanced AIs that play chess “are unpredictable to the best human chess players.”

“They will understand things from limited data,” he said.

“They will not get confused.”

On stage, he drew a comparison between the scaling of AI systems and evolutionary biology,
citing research that shows the relationship between brain and body mass across species.

He noted that while most mammals follow one scaling pattern, hominids (human ancestors) show a distinctly different slope in their brain-to-body mass ratio on logarithmic scales.

He suggested that, just as evolution found a new scaling pattern for hominid brains,
AI might similarly discover new approaches to scaling beyond how pre-training works today.
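
The pattern he is pointing to is the classic allometric power law: on log-log axes a power law plots as a straight line, and a different exponent shows up as a different slope. In standard notation (ours, not Sutskever's):

```latex
m_{\text{brain}} = a \, m_{\text{body}}^{\,k}
\qquad\Longrightarrow\qquad
\log m_{\text{brain}} = \log a + k \,\log m_{\text{body}}
```

One exponent $k$ fits most mammals; hominids sit on a line with a visibly steeper $k$.
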
theverge.com/2024/12/13/243208

OpenAI cofounder Ilya Sutskever predicts the end of AI pre-training

The Verge
Dec 14, 2024, 18:10
Chuck Darwin

OpenAI has appointed Paul M. Nakasone,
a retired general of the US Army and a former head of the National Security Agency ( #NSA ),
to its board of directors, the company announced on Thursday.
OpenAI says Nakasone will join its Safety and Security Committee, which was announced in May and is led by CEO Sam Altman, “as a first priority.”
Nakasone will “also contribute to OpenAI’s efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats.”
#Nakasone was nominated to lead the NSA by former President Donald Trump, and directed the agency from 2018 until February of this year.
Before Nakasone left the NSA, he wrote an op-ed supporting the renewal of Section 702 of the Foreign Intelligence Surveillance Act, the surveillance program that was ultimately reauthorized by Congress in April.
Recent departures tied to safety at OpenAI include co-founder and chief scientist Ilya #Sutskever, who played a key role in Sam Altman’s November firing and eventual un-firing,
and Jan #Leike, who said on X that “safety culture and processes have taken a backseat to shiny products.”
theverge.com/2024/6/13/2417807

Former head of NSA joins OpenAI board

The Verge
lichess

Lichess' puzzle database (CC0 licensed! see: database.lichess.org/#puzzles) has a "cameo" in a pre-print (arxiv.org/pdf/2312.09390.pdf) about supervising stronger LLMs with weaker ones.
The pre-print is authored by #OpenAI 's Ilya #Sutskever and his superalignment team.
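
For anyone who wants to poke at the same data, the puzzle dump is a single CSV (distributed zstd-compressed). A minimal reading sketch; the column names follow the schema documented at database.lichess.org, so double-check them against the current file:

```python
# Stream the CC0 Lichess puzzle database (decompress the .zst download first).
import csv

with open("lichess_db_puzzle.csv", newline="") as f:
    for row in csv.DictReader(f):
        fen, moves = row["FEN"], row["Moves"]  # position + solution line
        if int(row["Rating"]) >= 2000:         # keep only hard puzzles
            print(fen, moves)
```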

#chess #lichess #opendata #LLM #AI

meta_blum

Things are boiling over on the board: the CEO gets fired, comes back after four days, and the board is reshuffled. Who actually decides what here? What function does the supervisory board have, and who appoints it? Do any of you know more about this?
#OpenAI #Sutskever #Altman #ChatGPT

aproitz

#Sutskever evidently wasn't aware of what he would trigger with his #OpenAI coup. What will remain is the skeleton of a company that can no longer find investors and will at most still do a bit of research. The rest are sitting in a new subsidiary at Microsoft, celebrating with #SamAltman and #GregBrockman.

Marcel Waldvogel

With a big reaction from the employees. Makes things look more confusing than ever. Also, the role of chief scientist/board member Ilya #Sutskever seems strange: Apparently, he must have actively voted #Altman out, but now regrets it? 🤷

(Plenty of interesting more-or-less informed guesswork in the Slashdot discussion.)
#OpenAI
tech.slashdot.org/story/23/11/

Nearly 500 OpenAI Employees Threaten To Quit Unless Board Resigns - Slashdot

tech.slashdot.org
HistoPol (#HP) 🏴 🇺🇸 🏴

@TheGuardian

(4/n)

...letter’s signatories are “unable to work for or with people that lack competence, judgment and care for our mission and employees.”

Technology journalist #KaraSwisher has posted the letter on X (formerly Twitter) – and points out that OpenAI’s chief scientist, #IlyaSutskever, has signed it, even though he is a member of the board that fired Altman.

As flagged earlier, #Sutskever has posted on #X today that “I deeply regret...

twitter.com/i/status/172660301

Boris Steipe

There has been a hiatus in my Sentient Syllabus writing while I was lecturing and thinking things through.

But I have just posted an analysis on the #Sutskever / #Altman schism at #OpenAI and hope you find it enlightening.

Enjoy!

#ChatGPT #GPT4 #HigherEd #AI #AGI #ASI #generativeAI #Bostrom #AI-ethics #Education #University #Academia

sentientsyllabus.substack.com/

Priests and Badgers

What generative AI can't – or couldn't – do

sentientsyllabus.substack.com