This looks cool: Mathieu Blondel and Vincent Roulet have posted a first draft of their book on arXiv:
arxiv.org/abs/2403.14606

Whenever some online customer-support chat pops up, I start asking it programming questions about Python to see if it's just AI. Someday I'll reach a real person at Bank of America who knows Python and we'll both be very confused.

So what are everyone's bets as to the secretive hardware form factor Jony Ive is said to be working on with OpenAI? Here's mine: probably (1) something that looks like a Bluetooth headset, and (2) after that, something like Snap's Spectacles.

Reflecting on the chunky cables featured in Half-Life 2 and later, I came to the conclusion that Combine electrical systems must clearly run on 1.5 volts!

I wuz robbed.

More specifically, I was tricked by a phone-phisher pretending to be from my bank, and he convinced me to hand over my credit-card number, then did $8,000+ worth of fraud with it before I figured out what happened. And *then* he tried to do it again, a week later!

--

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

pluralistic.net/2024/02/05/cyb

1/

Google continues to struggle with cybercriminals running malicious ads on its search platform to trick people into downloading booby-trapped copies of popular free software applications. The malicious ads, which appear above organic search results and often precede links to legitimate sources of the same software, can make searching for software on Google a dicey affair.

h/t @th3_protoCOL for the image

krebsonsecurity.com/2024/01/us

... the wild and probably bogus details aside, though, I've never bought into the idea that hallucinating or BSing is an unsolvable, intrinsic flaw of LLMs. It may take little more than operationalizing the process we humans use to construct an internally consistent world model: explore a range of consequences that follow from our beliefs, spot inconsistencies, and update the world model accordingly. That looks like something that could be attempted within well-trodden paradigms like RL or GANs, or something not much more complex, so my bet is that we'll have largely worked it out within 4-5 years.
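To make the "explore consequences, spot inconsistencies, update" loop concrete, here's a toy propositional sketch. Everything in it is hypothetical illustration: beliefs are hand-coded booleans and the "consequences" come from hand-written implication rules, where a real system would sample entailed statements from the LLM itself.

```python
def close(beliefs, rules):
    """Forward-chain rules of the form 'if p then q=v' over a belief set.

    beliefs: dict mapping statement name -> believed truth value.
    rules:   list of (premise, (conclusion, value)) implications.
    Returns (closure, clashes): the derived beliefs plus any statements
    whose derived value contradicts one already held.
    """
    derived = dict(beliefs)
    clashes = []
    changed = True
    while changed:
        changed = False
        for premise, (concl, val) in rules:
            if derived.get(premise) is True:
                if concl in derived and derived[concl] != val:
                    if concl not in clashes:
                        clashes.append(concl)  # inconsistency spotted
                elif concl not in derived:
                    derived[concl] = val       # new consequence explored
                    changed = True
    return derived, clashes


def revise(beliefs, rules):
    """Crude update step: drop any single belief whose removal
    eliminates all contradictions (the 'update the world model' part)."""
    if not close(beliefs, rules)[1]:
        return beliefs
    for b in beliefs:
        trimmed = {k: v for k, v in beliefs.items() if k != b}
        if not close(trimmed, rules)[1]:
            return trimmed
    return beliefs  # no single-belief fix exists


# Example: "it rained" implies "the ground is not dry", which clashes
# with the held belief that the ground is dry.
beliefs = {"it_rained": True, "ground_is_dry": True}
rules = [("it_rained", ("ground_is_dry", False))]
print(revise(beliefs, rules))  # one of the clashing beliefs gets dropped
```

In the imagined LLM version, the clash list would become a training signal (e.g. an RL penalty on generating statements that contradict the model's own entailments) rather than a hard belief deletion.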


Woke up from a strange, vivid dream this morning in which I was attending an ML symposium and someone gave a talk on overcoming the hallucination problem with LLMs. The main slide had a graph of nodes representing LLM statements, and they were doing some sort of graph diffusion process where the "curl" operator was pinpointing the contradictory/inconsistent statements, which they could then follow to update the weights to discourage those from occurring. Needless to say, I immediately tried to arrange an impromptu meeting between the speaker and some DL luminary who was also there to get them to adopt it.😂

While he advises male social-science graduates to go into IT, he advises female social-science graduates to consider gender studies for further training.
Weren't we further along at some point? For instance, far enough that girls, too, can do anything and the world is open to them?

Conda is moving our social media presence from Twitter/X to Mastodon and LinkedIn at the start of 2024. It's past time to move into spaces that are welcoming and more in line with our community values. Going forward, you can find us at
🐘 @conda (fosstodon.org/@conda) on Mastodon
🔗 Conda Community (linkedin.com/company/condacomm) on LinkedIn

Announcement: conda.org/blog/2023-12-27-soci
We hope to see you on Mastodon and LinkedIn in 2024!

Wondering if anyone out there is using LLMs as a proposal heuristic in NAS. It would seem fruitful (e.g., after fine-tuning on NeurIPS papers). Add in reinforcement learning for bonus points. It's not quite recursive self-improvement, since re-architecting and retraining the LLM would be a slow, expensive, human-in-the-loop step.
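The loop I have in mind would look roughly like this. Everything here is a stand-in: `propose` fakes the LLM with a random tweak of the current best architecture (a real run would prompt a fine-tuned model), and `evaluate` fakes the slow train-and-validate step with a toy score over (depth, width).

```python
import random


def evaluate(arch):
    """Stand-in for 'train the candidate and measure validation accuracy'
    (the slow, expensive step). Toy score peaks at depth 6, width 256."""
    depth, width = arch
    return -abs(depth - 6) - abs(width - 256) / 64


def propose(best, rng):
    """Stand-in for the LLM proposer: given the current best architecture,
    suggest a tweaked candidate. A real run would prompt the model here."""
    depth, width = best
    return (max(1, depth + rng.choice([-1, 0, 1])),
            max(32, width + rng.choice([-64, 0, 64])))


def search(start, steps=200, seed=0):
    """Greedy proposal loop: keep whichever candidate scores best so far."""
    rng = random.Random(seed)
    best, best_score = start, evaluate(start)
    for _ in range(steps):
        cand = propose(best, rng)
        score = evaluate(cand)
        if score > best_score:  # RL variant: reward the proposer instead
            best, best_score = cand, score
    return best, best_score


best, score = search((2, 64))
print(best, score)
```

The greedy accept is where RL would slot in: instead of just keeping the best candidate, you'd feed the score back as a reward to fine-tune the proposer's policy.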

Old days: ?SYNTAX ERROR?

These days: <scratches head under cap> ya know, I'm not sure we can go any further with this thing, boss.


'Microcanonical Hamiltonian Monte Carlo', by Jakob Robnik, G. Bruno De Luca, Eva Silverstein, Uroš Seljak.

jmlr.org/papers/v24/22-1450.ht

#microcanonical #langevin #hamiltonian

So I suppose Altman and Brockman are now going on to found NeXTAI, am I guessing that right?

So I guess that's one way to read the top-right corner of my desktop (hat tip to my wife). "It's 10:44PM. The year is 3500 RPM, and it is 62C outside." Pretty dystopian!
