
DeepMind's paper refutes this last claim and finds that both are equally useful.
The differences between DeepMind's & OpenAI's papers matter for forecasting how big LLMs need to get. They arrived at these different conclusions because DeepMind did more learning-rate tuning. This blog post (severelytheoretical.wordpress.) hypothesizes that DeepMind's paper might itself not be doing enough hyperparameter tuning, and that the scaling law may be less severe, perhaps not even a power law.
3/3
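To make the stakes concrete, here is a minimal sketch (with made-up numbers, not values from either paper) of fitting a saturating power law L(N) = a·N^(-b) + c to loss-vs-parameters data. The debate above is essentially about whether insufficient hyperparameter tuning at large N inflates the measured losses and thus biases the fitted exponent b.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up (model size, loss) points purely for illustration;
# not taken from either paper.
N = np.array([1e8, 3e8, 1e9, 3e9, 1e10])
loss = np.array([3.10, 2.80, 2.55, 2.35, 2.20])

def saturating_power_law(N, a, b, c):
    # L(N) = a * N**(-b) + c, where c is the irreducible loss floor
    return a * N ** (-b) + c

(a, b, c), _ = curve_fit(saturating_power_law, N, loss,
                         p0=(10.0, 0.1, 1.5), maxfev=10_000)
print(f"fitted exponent b = {b:.3f}")
# Under-tuned runs at large N would sit above the true curve, flattening
# the fit and changing the forecast for how big models must get.
```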


On #TheDataExchangePod I speak with Mark Chen, Research Scientist at OpenAI. We discuss the evolution of DALL·E, key research developments that led to DALL·E 2, data sources, safety measures, and the ML models needed for its success. #machinelearning #dalle2 #dalle #AI #generativeai thedataexchange.media/explorin

I do however have high hopes for #blogic and RDF+Surfaces to make the interpretation of RDF vocabularies interoperable across organizations

w3c-cg.github.io/rdfsurfaces/


The latest issue of 'Ahead of AI' is now available!

This edition covers my top 10 papers of the year, as well as trends in the AI industry, notable developments in open source projects, and my personal yearly review routine.

Check it out at the link below and have a happy new year!

magazine.sebastianraschka.com/

How good of a BERT can one get in ONE DAY on ONE GPU?

With all the recent studies about scaling compute up, this paper takes a refreshing turn and does a deep dive into scaling down compute.

It's well written and chock-full of insights. Here is my summary, along with my opinions.

arxiv.org/abs/2212.14034

🧶 1/N

… has … for building pipelines. What is the counterpart for …? Any pointers & ideas would be very welcome.

With the advent of #ChatGPT, everyone is talking about large language models. But how do they work? Initially, such models were trained to complete sentences.

But they exhibit exciting capabilities that can be invoked by feeding them "prompts."

Read our Prompt Engineering Guide for a quick overview of the current state of this field.

#nlproc #gpt #llm
inovex.de/de/blog/prompt-engin
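Since these models are at bottom next-token predictors, prompting is just steering the completion. A minimal sketch using the Hugging Face pipeline API, with gpt2 as a small stand-in model (any causal LM would do):

```python
from transformers import pipeline

# gpt2 is a small stand-in; larger causal LMs follow prompts far better
generator = pipeline("text-generation", model="gpt2")

# A few-shot prompt: the examples set the pattern the model should continue
prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)
out = generator(prompt, max_new_tokens=5, do_sample=False)
print(out[0]["generated_text"])
```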

Scikit-learn 1.2 is out: github.com/scikit-learn/scikit

Was an eventful December & I totally missed the new release of my favorite #machinelearning library!

My personal highlights are around the HistGradientBoostingClassifier (if you haven't used it yet, it's a LightGBM-inspired implementation that works really well).

It now supports (quick sketch after the list):

1. interaction constraints (in trees, features that appear along a particular path are considered to be "interacting")
2. class weights
3. feature names for categorical features
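Here's a sketch of all three on toy data (column names and data are made up; `interaction_cst`, `class_weight`, and name-based `categorical_features` are the 1.2 additions):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "x0": rng.normal(size=200),
    "x1": rng.normal(size=200),
    "color": rng.integers(0, 3, size=200),  # ordinal-encoded categorical
})
y = (X["x0"] + X["x1"] > 0).astype(int)

clf = HistGradientBoostingClassifier(
    interaction_cst=[[0, 1], [2]],   # 1. x0 & x1 may interact; "color" only splits alone
    class_weight="balanced",         # 2. class weights
    categorical_features=["color"],  # 3. categorical features by column name
)
clf.fit(X, y)
print(clf.score(X, y))
```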

😮 Exciting times:

Surprised to see a #ChatGPT-style AI model integrated with web search so soon!

The new #YouChat provides links to sources, but just like other AI models it also makes many mistakes.

Will be interesting to see how people use it.

you.com/search?q=what+was+the+

#AI #NLProc #NLP #IR

I asked #chatGPT for 4 visual descriptions involving technology from the book 'Snow Crash' (so insane that you can now ask for stuff like that!?). I then copy-pasted them into Midjourney. Here are some results.

#midjourney #midjourneyV4 #aiart #aiartist #aiartcommunity

Hey, I just signed up a few days ago and want to introduce myself.
I am a #machinelearning researcher focusing on deep neural nets. My passion is sharing all kinds of stuff about machine learning & open source. (Some of you may know me from my books “Python Machine Learning” and “Machine Learning with PyTorch and Scikit-Learn”.)
I love to teach others and am currently working as Lead AI Educator at Lightning AI, as well as an Assistant Professor of Statistics at the University of Wisconsin-Madison.

We've been working on new prodi.gy workflows that let you use the OpenAI API to kickstart your annotations, via zero- or few-shot learning. We've just published the first recipe, for NER annotation 🎉 github.com/explosion/prodigy-o

Here's what, why and how. 🧵

Let's say you want to do some 'traditional' NLP thing, like extracting information from text. The information you want to extract isn't on the public web — it's in this pile of documents you have sitting in front of you.
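Not Prodigy's actual recipe (see the repo for that), but a minimal sketch of the underlying idea, assuming the plain OpenAI completions endpoint: describe the labels in the prompt, let the model list entity spans, and parse its reply into annotation candidates.

```python
import os
import requests

def zero_shot_ner(text, labels=("PERSON", "ORG", "PRODUCT")):
    """Ask a completion model for entities; parse 'LABEL: span' lines."""
    prompt = (
        f"Extract entities of these types from the text: {', '.join(labels)}.\n"
        "Answer with one 'LABEL: span' pair per line.\n\n"
        f"Text: {text}\nEntities:\n"
    )
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "text-davinci-003", "prompt": prompt,
              "max_tokens": 200, "temperature": 0},
        timeout=30,
    )
    reply = resp.json()["choices"][0]["text"]
    return [tuple(part.strip() for part in line.split(":", 1))
            for line in reply.strip().splitlines() if ":" in line]

# zero_shot_ner("Sam Altman announced GPT-4 at OpenAI.") might return
# something like [("PERSON", "Sam Altman"), ("ORG", "OpenAI")]
```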

Please donate to the Internet Archive if you can.
archive.org/donate

We are a bargain! Serving millions every day with books, music, video, and web archives.

Please help keep everything freely available.

Introducing: LAION-5B, a large-scale dataset for research purposes consisting of 5.85B CLIP-filtered image-text pairs: 2.3B contain English text, and 2.2B are samples from 100+ other languages.
#OpenData #MachineLearning
laion.ai/blog/laion-5b/
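"CLIP-filtered" means a pair was kept only if the CLIP similarity between image and caption cleared a threshold (reportedly around 0.28 for the English pairs). A minimal sketch of that filter using the Hugging Face CLIP port:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def keep_pair(image: Image.Image, caption: str, threshold: float = 0.28) -> bool:
    # Embed image and caption, keep the pair if cosine similarity clears
    # the threshold (threshold value here is illustrative)
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    sim = torch.cosine_similarity(out.image_embeds, out.text_embeds).item()
    return sim >= threshold
```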

This article sheds light on why most machine learning products never make it into production, despite the ongoing machine learning boom. It also shows how MLOps can help tackle these challenges across the machine learning life cycle.

inovex.de/de/blog/a-conceptual #ml #mlops

Controversial #machinelearning suggestions by Yann LeCun at #NeurIPS2022 Self-Supervised Learning workshop!

He suggests:

(1) abandoning generative AI architectures
(in favour of joint embedding ones)

(2) abandoning probabilistic models
(in favour of energy-based models)

(3) abandoning contrastive methods
(in favour of regularized methods)

(4) abandoning RL where possible
(in favour of model-predictive control)

Related talk:
youtu.be/VRzvpV9DZ8Y

Source: twitter.com/BeingMIAkashs/stat

Monolith (from ByteDance, creator of TikTok) is an interesting system for online training that addresses two problems faced by modern recommenders: (1) concept drift: the underlying distribution of the training data is non-stationary; (2) sparse features: the features used by models are mostly sparse, categorical, and dynamically changing. #recsys #MachineLearning

arxiv.org/abs/2209.07663v2
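Not Monolith's actual implementation, just a toy sketch of the idea behind its collisionless embedding table: sparse categorical features get embeddings created on first sight, so the table can grow and be updated online as the feature space drifts.

```python
import numpy as np

class DynamicEmbeddingTable:
    """Toy sketch (all names made up): ids map to vectors created on
    first sight, so the feature space can grow during online training."""

    def __init__(self, dim: int = 16, seed: int = 0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
        self.table = {}  # raw feature id -> embedding, no hash collisions

    def lookup(self, feature_id) -> np.ndarray:
        # New ids appear constantly in a non-stationary stream;
        # initialize an embedding the first time each one is seen
        if feature_id not in self.table:
            self.table[feature_id] = self.rng.normal(scale=0.01, size=self.dim)
        return self.table[feature_id]

    def sgd_update(self, feature_id, grad: np.ndarray, lr: float = 0.01):
        # Online update as new interactions stream in
        self.table[feature_id] -= lr * grad
```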

Like @timfinin I tried ChatGPT on last semester's final exam for my lecture "Information Service Engineering", with questions/tasks on Knowledge Graphs, basic NLP, and basic ML. It performed surprisingly well (for SPARQL it achieved 11 out of 12 points). Even on more complex questions, like performing an evaluation or constructing an FSA, its performance was not flawless but not bad either. Overall, ChatGPT would have passed. Congratulations!

#ChatGPT #NLP #FIZISE #ML #knowledgegraph #SPARQL
