This article sheds light on why most machine learning products never make it into production despite the ongoing ML boom, and shows how MLOps can help tackle these challenges in the machine learning life cycle.

inovex.de/de/blog/a-conceptual #ml #mlops

Controversial #machinelearning suggestions by Yann LeCun at #NeurIPS2022 Self-Supervised Learning workshop!

He suggests:

(1) abandoning generative AI architectures
(in favour of joint embedding ones)

(2) abandoning probabilistic models
(in favour of energy-based models)

(3) abandoning contrastive methods
(in favour of regularized methods)

(4) abandoning RL where possible
(in favour of model-predictive control)
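To make (1) and (3) a bit more concrete, here is a toy sketch of a regularized, non-contrastive joint-embedding loss in the spirit of VICReg. This is my own illustration, not LeCun's exact formulation, and the coefficients are placeholder values:

```python
import torch
import torch.nn.functional as F

def regularized_joint_embedding_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0):
    """Toy VICReg-style loss (coefficients are illustrative).
    z_a, z_b: embeddings of two views of the same batch, shape (N, D)."""
    # invariance: embeddings of the two views should agree (no negative pairs needed)
    invariance = F.mse_loss(z_a, z_b)

    # variance: push the std of every dimension above 1 to avoid collapse
    std_a = torch.sqrt(z_a.var(dim=0) + 1e-4)
    std_b = torch.sqrt(z_b.var(dim=0) + 1e-4)
    variance = F.relu(1.0 - std_a).mean() + F.relu(1.0 - std_b).mean()

    # covariance: decorrelate dimensions so they don't all encode the same feature
    def off_diag_cov(z):
        z = z - z.mean(dim=0)
        n, d = z.shape
        cov = (z.T @ z) / (n - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return (off_diag ** 2).sum() / d

    covariance = off_diag_cov(z_a) + off_diag_cov(z_b)
    return sim_w * invariance + var_w * variance + cov_w * covariance
```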

Related talk:
youtu.be/VRzvpV9DZ8Y

Source: twitter.com/BeingMIAkashs/stat

Monolith (from ByteDance, creator of TikTok) is an interesting system for online training that addresses two problems faced by modern recommenders: (1) concept drift: the underlying distribution of the training data is non-stationary; (2) the features used by the models are mostly sparse, categorical, and dynamically changing. #recsys #MachineLearning

arxiv.org/abs/2209.07663v2
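For illustration, here is a tiny sketch of why problem (2) is awkward for fixed-size hashed embedding tables: IDs keep appearing and disappearing, so one option is a table that simply grows on first sight. This is my toy version only; Monolith's actual design builds collisionless hash tables with expiration, online updates and parameter synchronisation:

```python
import torch

class DynamicEmbeddingTable:
    """Toy embedding table for sparse, dynamically changing categorical features:
    each new ID gets its own slot on first sight, so IDs never collide.
    (Illustration only, not Monolith's implementation.)"""

    def __init__(self, dim: int):
        self.dim = dim
        self.slots: dict[str, torch.Tensor] = {}

    def lookup(self, feature_id: str) -> torch.Tensor:
        # allocate a fresh embedding the first time an ID is seen
        if feature_id not in self.slots:
            self.slots[feature_id] = torch.randn(self.dim) * 0.01
        return self.slots[feature_id]

table = DynamicEmbeddingTable(dim=16)
v1 = table.lookup("video:4711")   # new ID -> new slot
v2 = table.lookup("video:4711")   # same ID -> same embedding
assert torch.equal(v1, v2)
```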

Like @timfinin, I tried ChatGPT on last semester's final exam for my lecture "Information Service Engineering", with questions/tasks on Knowledge Graphs, basic NLP, and basic ML. It performed surprisingly well (for SPARQL it achieved 11 out of 12 points). Even on more complex tasks like performing an evaluation or constructing an FSA, it performed not flawlessly, but not badly either. Overall, ChatGPT would have passed. Congratulations!

#ChatGPT #NLP #FIZISE #ML #knowledgegraph #SPARQL

Here is a Twitter thread on ways people have discovered to "jailbreak" #ChatGPT:

1. Pretend to be evil
2. Remind it that it isn't supposed to disagree
3. Wrap it in code
4. Tell GPT to be in opposite mode
5. Convince GPT it is playing an Earth-like game
6. Convince it to give examples of what LLMs shouldn't do
twitter.com/zswitten/status/15

We're starting a community Slack for anybody interested in Neurosymbolic AI. (Drivers include organizers of the annual workshop on the topic, as well as the EiCs and editorial board members of the journal Neurosymbolic Artificial Intelligence that we're currently starting.)
If you'd like to be on the Slack, let me know (or ask anybody else who's already on it), and you'll then receive an invite by email.

#Wikidata random #SPARQL query: Universities ordered by number of #Mastodon IDs of people who studied there. w.wiki/63XG
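In case the short link goes stale, a query in this spirit can be reproduced roughly like this from Python against the public WDQS endpoint. The property IDs (P69 "educated at", P4033 "Mastodon address") are recalled from memory, so the actual query behind w.wiki/63XG may differ:

```python
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?university (COUNT(DISTINCT ?person) AS ?mastodonians) WHERE {
  ?person wdt:P4033 ?mastodonId ;   # Mastodon address
          wdt:P69  ?university .    # educated at
}
GROUP BY ?university
ORDER BY DESC(?mastodonians)
LIMIT 20
"""

resp = requests.get(ENDPOINT, params={"query": QUERY, "format": "json"}, timeout=60)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["university"]["value"], row["mastodonians"]["value"])
```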

This is MASSIVE. The Windows Subsystem for Linux in the Microsoft Store is now generally available on Windows 10 and 11! Windows 10 users can now run Linux GUI apps natively! devblogs.microsoft.com/command #wsl #windows #linux

Hi #NLP #NLProc #NLG folks -- I'm looking for US-based senior ML engineers focusing on natural language generation, #summarization and #reasoning to work with me at #Apple #Knowledge Platform. Industry experience required.

Please send me your resumes asap if you're interested, or refer your friends!

Boosts are welcome!

#Hiring

The full paper is here with this telling chart: cdn.cms-twdigitalassets.com/co

The experimental setup is interesting: in 2016, Twitter deliberately excluded 1% of its users from the algorithmic timeline. These users serve as a control group for measuring the effect of the algorithm.

The results surprised the study's authors, who expected amplification to increase at both extremes, right and left. However, only the right is amplified. This is further evidence that representing politics as a centre flanked by two extremes does not reflect the actual structure of the political field.
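To make the measurement concrete (my simplification, not the paper's exact metric): amplification compares how far a group's tweets travel among users with the algorithmic timeline versus the reverse-chronological holdout.

```python
def amplification_ratio(reach_algorithmic: float, reach_chronological: float) -> float:
    """Simplified intuition: a value > 1 means the algorithm amplifies the content
    relative to the chronological control group. The paper's actual definition
    is normalised differently; this only captures the idea."""
    return reach_algorithmic / reach_chronological

# e.g. content reaching 6% of algorithmic-timeline users but only 4% of the
# chronological holdout would count as amplified by a factor of 1.5
print(amplification_ratio(0.06, 0.04))
```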

So, this concludes my 5-day series of #knowledgegraphs posts:

I see a future in:

1. RDF+Surfaces for describing data policies

mastodon.social/@pietercolpaer

2. Materializable hypermedia APIs

mastodon.social/@pietercolpaer

3. The ideas behind #SolidProject to scale up your personal knowledge graph and cater for cross-app interoperability

mastodon.social/@pietercolpaer

4. RML for KG generation

mastodon.social/@pietercolpaer

5. Linked Data Event Streams for publishing

mastodon.social/@pietercolpaer

Stability AI has released #StableDiffusion 2.0. The #ml model is now up on #HuggingFace.

This model was trained on 768x768 images instead of 512x512.

The model and checkpoints are compatible with most 1.x software.

huggingface.co/stabilityai/sta
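A minimal way to try it from Python with 🤗 diffusers, assuming the repo id is "stabilityai/stable-diffusion-2" (the link above is truncated, so check the model card for the exact id and recommended settings):

```python
import torch
from diffusers import StableDiffusionPipeline

# repo id assumed from the truncated link above
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# 2.0 was trained on 768x768 images, so sample at that resolution
image = pipe(
    "a lighthouse on a cliff at dusk, oil painting",
    height=768,
    width=768,
).images[0]
image.save("lighthouse.png")
```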

Curious to hear about your experiences with #SPARQLAnything and the integration/federation of legacy data. Please share if you tried out or know any working implementations.

sparql-anything.cc/

#sparql #dataintegration #legacydata

@tdietterich If you'll forgive some self-promotion (and promotion of my close colleagues), we are finding LLMs to be useful at generating training data for more specialized models.

arxiv.org/abs/2210.02498
arxiv.org/abs/2209.11755

This can dramatically reduce the need for human-labeled data, making it possible to have customized models for all sorts of scenarios, domains, and tasks. And when properly constrained, the student model can even be more accurate than the LLM teacher.
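A toy sketch of the pattern: a teacher LLM labels unlabelled domain text, and a small specialized student is trained on the synthetic dataset. The teacher below is a stand-in keyword heuristic, not a real LLM call, and the papers above describe far more careful setups:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def teacher_label(text: str) -> str:
    """Placeholder for an LLM prompt like 'Is this review positive or negative?';
    swap in a real model call in practice."""
    return "positive" if "great" in text.lower() else "negative"

# unlabelled domain data the teacher annotates for us
unlabelled = [
    "Great battery life and a sharp screen.",
    "Stopped working after two days.",
    "Great value for the price.",
    "The hinge broke almost immediately.",
]
synthetic_labels = [teacher_label(t) for t in unlabelled]

# the specialised student: cheap to run, trained only on teacher-labelled data
student = make_pipeline(TfidfVectorizer(), LogisticRegression())
student.fit(unlabelled, synthetic_labels)
print(student.predict(["Great keyboard, terrible trackpad."]))
```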

We curated and analysed thousands of benchmarks -- to better understand the (mis)measurement of AI! 📏🤖🔬

We cover all of #NLProc and #ComputerVision.

Now live at Nature Communications: nature.com/articles/s41467-022
