
@pervognsen @rygorous
Man, that thread. Those Denthor of Asphyxia rotating-cube (and logo, IIRC) demos must have been how I learned my first 3D math, and then there was some plaintext doc with the 4x4 matrix rundown (I think Lithium/VLA and Andre Yew). There was also a Comanche-style heightfield raytracer from Stephen H Don and a nice, clean Doom-style renderer called MOOD by Alexei Frounze, all of which I got from a good friend who had internet. My departure ticket from the Pascal life was the source of a C++ engine that was, I think, called Focus, and a Q2-like engine named Twister.

@rooster@chaosfem.tw That could have featured in the opening scene in Forrest Gump FWIW.

@tdietterich I'm actually impressed that they find time for a book at all amid the fiercest competition with OpenAI, and now from a bit of an underdog position. There could also be more on how this material can be synthesized into a probabilistic programming language like NumPyro (sampling, variational inference).

@tdietterich I combed through it (in a rush). It's a nice, very dense collage of much of the core math of today's ML/DL scene. There isn't much of an arc, though, and the chapters aren't held together by much, so the topic still deserves a much more didactic textbook some day.

This looks cool: Mathieu Blondel and Vincent Roulet have posted a first draft of their book on arXiv:
arxiv.org/abs/2403.14606

Whenever some online customer support chat comes up, I start asking it programming questions about Python to see if it's just AI. Someday I'll reach a real person at Bank of America who knows Python and we'll both be very confused.

@zeux Might keep an eye on JetBrains IDEs... I've mainly used them for Python and much less so with C++, but there have been some C++ and multi-lang related tools coming from that direction lately (Rider, Fleet, CLion Nova).

@deobald @simon Thanks! Yeah, I figured that'd be a reason; I'm impressed they're still busy adding things like SIMD optimizations to it as we speak. And yeah, Lisp is certainly a great language for pumping out crazy amounts of functionality, especially when one's dealing with tree structures like JSON, queries, and responses.

@simon 70% Steel Bank Common Lisp, 20% Rust. There's definitely a connoisseur at work. Kinda amazed that someone would whip that out in 2023 to make SotA software of that size.

@rabaath Funny thing is that it feels clunky to me even though I didn't come from R!

So what are everyone's bets on the secretive hardware form factor Jony Ive is said to be working on with OpenAI? Here's mine: probably (1) something that looks like a Bluetooth headset, and (2) after that, something like Snap's Spectacles.

Reflecting on the chunky cables featured in Half-Life 2 and later, I came to the conclusion that Combine electrical systems must clearly be running on 1.5 Volts!

@robpike When I got to "One standardized version of G-code, known as BCL (Binary Cutter Language), is used only on very few machines." I could suddenly sense the desperation Owen Lars must have felt as he found himself without a droid that speaks the binary language of moisture vaporators.

I wuz robbed.

More specifically, I was tricked by a phone-phisher pretending to be from my bank, and he convinced me to hand over my credit-card number, then did $8,000+ worth of fraud with it before I figured out what happened. And *then* he tried to do it again, a week later!

--

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

pluralistic.net/2024/02/05/cyb

1/

@grumpygamer Know the "bus factor"? Maybe we should call it the 737 factor now

Google continues to struggle with cybercriminals running malicious ads on its search platform to trick people into downloading booby-trapped copies of popular free software applications. The malicious ads, which appear above organic search results and often precede links to legitimate sources of the same software, can make searching for software on Google a dicey affair.

h/t @th3_protoCOL for the image

krebsonsecurity.com/2024/01/us

... the wild and probably bogus details aside, though, I've never bought into the idea that hallucinating or BSing is an unsolvable, intrinsic flaw of LLMs. Fixing it may take not much more than operationalizing the process we humans use to construct an internally consistent world model: explore a range of consequences that follow from our beliefs, spot inconsistencies, and update the model accordingly. That looks like something that could be attempted in well-trodden paradigms like RL or GANs, or something not much more complex, so my bet is that we should have largely worked it out within 4-5 years.


Woke up from a strange, vivid dream this morning in which I was attending an ML symposium and someone gave a talk on overcoming the hallucination problem in LLMs. The main slide had a graph of nodes representing LLM statements, and they were running some sort of graph diffusion process where a "curl" operator pinpointed the contradictory/inconsistent statements, which they could then follow to update the weights and discourage those from occurring. Needless to say, I immediately tried to arrange an impromptu meeting between the speaker and some DL luminary who was also there to get them to adopt it.😂
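For fun, here's a toy sketch of one possible reading of the dream's "curl" idea (to be clear: this is my own made-up illustration, not anyone's actual method, and all statements and scores below are invented). Treat statements as nodes and pairwise agreement as signed edge values; the discrete curl of a triangle is the signed sum of its edges traversed as a loop, so a mutually consistent triple sums to ~0 and a large |curl| localizes a contradiction:

```python
# Toy "curl" inconsistency detector on a statement graph.
# Nodes = statements; scores[(a, b)] = signed agreement oriented a -> b.
# Curl of a 3-cycle a -> b -> c -> a is the signed sum of its edges;
# consistent pairwise judgments give curl ~ 0, contradictions don't.
from itertools import combinations

def edge(scores, a, b):
    """Agreement score oriented from a to b (antisymmetric)."""
    if (a, b) in scores:
        return scores[(a, b)]
    return -scores[(b, a)]

def triangle_curls(scores, nodes):
    """Curl around every 3-cycle over the given nodes."""
    return {
        (a, b, c): edge(scores, a, b) + edge(scores, b, c) + edge(scores, c, a)
        for a, b, c in combinations(nodes, 3)
    }

# Hypothetical scores for four statements S0..S3; S3 contradicts the rest.
scores = {
    (0, 1): 1.0, (1, 2): 1.0, (0, 2): 2.0,   # mutually consistent triple
    (0, 3): 1.0, (1, 3): -2.0, (2, 3): 0.5,  # S3 is the odd one out
}
curls = triangle_curls(scores, [0, 1, 2, 3])
flagged = max(curls, key=lambda t: abs(curls[t]))  # triangle with largest |curl|
```

Here the consistent triangle (0, 1, 2) has curl exactly 0 (1 + 1 - 2), while every triangle containing S3 has nonzero curl, so `flagged` picks out a triple involving the contradictory statement. A real system would presumably get the pairwise scores from an entailment model and feed the flagged triples back into training, but that part stayed in the dream.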
