Trying to find the name for that API design anti-pattern where a function overloads its behavior based on an argument's type or value, but the distinction is riddled with error-prone ambiguities. Example: you can pass a list of values or just a single value (but don't try to pass a single value that happens to be a list that way); you can pass any string, except when it's "default", in which case something special happens (hopefully that never occurs in real data). Any pointers?
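A minimal sketch of the kind of overloaded signature I mean (the function and its behavior are hypothetical, just to make the ambiguities concrete):

```python
# Hypothetical API exhibiting the anti-pattern: one parameter accepts
# either a single value or a list of values, plus a magic sentinel string.
def set_tags(item, tags):
    if tags == "default":
        # Magic value: any real data literally named "default" misfires here.
        tags = ["untagged"]
    elif not isinstance(tags, list):
        # A single value is silently promoted to a one-element list...
        tags = [tags]
    # ...but a single value that happens to BE a list is indistinguishable
    # from a list of values, so there's no way to pass it at all.
    item["tags"] = list(tags)
    return item

set_tags({}, "python")      # -> tags == ["python"]
set_tags({}, ["a", "b"])    # -> tags == ["a", "b"]: two tags, or one list-valued tag?
set_tags({}, "default")     # -> tags == ["untagged"]: surprise!
```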
🐍2️⃣🚀 The **final** release candidate of Python 3.13 is out!
➡️ Library maintainers, please upload wheels to PyPI **now**, especially if you have compiled extensions (there are no more ABI changes), so everything is in place for the big 3.13.0 release in 3 weeks!
🔒 Also! Security releases have been made for all of Python 3.8 through 3.12. Please upgrade!
🧪 https://dev.to/hugovk/help-test-python-313-14j1
#Python #Python313 #RC #RC2 #Python312 #Python311 #Python310 #Python39 #Python38
It's that time of the year again. Trying to schedule a vaccine appointment with CVS... Log in, schedule appointment. "Something went wrong on our end". Sign in as guest, schedule appointment. "Sorry we can't schedule Christian's vaccine. Christian isn't eligible for a Flu right now." No wonder the uptake percentage is so low with that UX from the quasi-monopoly for vaccinations in the U.S.
Does anyone have recommendations for electronic music for long coding (or otherwise creative) sessions? My personal on-and-off favorite for close to a decade now has been the "8 Hour Study Mix" by all-nighter aka delta notch (taken down by YouTube but still retrievable via Google). In terms of maintaining the flow, I'd be inclined to call it a masterpiece. Also various mixes from The Grand Sound on YouTube (Night Drive, Best Progressive House Mix, etc.).
Sad to see AnandTech shutting down...
https://www.anandtech.com/show/21542/end-of-the-road-an-anandtech-farewell
"Dan, I'd like to invite you to attend an hands-on tutorial on August 7th at 1 PM ET where we will guide you through the essentials of setting up, ingesting, and querying an Iceberg-based lakehouse using Polaris Catalog on Snowflake."
I really have no idea what these emails are about that I keep getting. I can't imagine this course is any use to me in a heat-wave in California.
Valve should hire the guy who made that HL3 fan film, today. https://www.dsogaming.com/videotrailer-news/this-is-what-half-life-3-should-look-like-if-it-ever-comes-out/
Quite impressive work on LLM interpretability, with applications.
https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
This looks cool: Mathieu Blondel and Vincent Roulet have posted a first draft of their book on arXiv:
https://arxiv.org/abs/2403.14606
I wuz robbed.
More specifically, I was tricked by a phone-phisher pretending to be from my bank, and he convinced me to hand over my credit-card number, then did $8,000+ worth of fraud with it before I figured out what happened. And *then* he tried to do it again, a week later!
--
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security
1/
Google continues to struggle with cybercriminals running malicious ads on its search platform to trick people into downloading booby-trapped copies of popular free software applications. The malicious ads, which appear above organic search results and often precede links to legitimate sources of the same software, can make searching for software on Google a dicey affair.
h/t @th3_protoCOL for the image
https://krebsonsecurity.com/2024/01/using-google-search-to-find-software-can-be-risky/
... the wild and probably bogus details aside though, I've never bought into the idea that hallucinating or BSing is an unsolvable intrinsic flaw of LLMs. It may take not much more than operationalizing the process we humans use to construct an internally consistent world model: explore a range of consequences that follow from our beliefs, spot inconsistencies, and update the model accordingly. That looks like something that could be attempted in well-trodden paradigms like RL or GANs, or something not much more complex, so my bet would be that we'll have largely worked it out within 4-5 years.
Woke up from a strange vivid dream this morning in which I was attending an ML symposium and someone gave a talk on overcoming the hallucination problem with LLMs. The main slide had a graph of nodes representing LLM statements, and they were doing some sort of graph diffusion process where a "curl" operator was pinpointing the contradictory/inconsistent statements, which they could then follow to update the weights to discourage those from occurring. Needless to say, I immediately tried to arrange an impromptu meeting between the speaker and some DL luminary who was also there to get them to adopt it.😂
CTO at Intheon. Opinions are my own.