Every meeting I attend is exceptional. /flex
Unlike most judgments against a defendant, punitive damages awards are not dischargeable in bankruptcy so long as the relevant cause of action was based upon willful and malicious actions. $75M of that judgment is punitive.
What will likely happen instead is endless drawn-out appeals until either he's too broke to continue - and has nothing left to pay them - or he just dies.
If he loses - and he will, by a landslide - he'll throw ketchup at the walls of his cell.
That is, assuming we all get up off our duffs and vote. And get our friends to do so.
@Biggles @matt @danilo @maria @dalias I agree, "actively dangerous for the lazy or gullible" is a good summary of where we are today
That's why I spend so much effort trying to counter the hype and explaining to people that this stuff isn't science fiction AI, it's spicy autocomplete - it takes a surprising amount of work to learn how to use it effectively
@simon @matt @danilo @maria @dalias
"Useless for learning" is a bit of a straw man.
More accurate perhaps is "actively dangerous for the lazy or gullible". As an example, I point to *multiple* instances of lawyers turning in phony case citations. These people should absolutely know better - yet it's happened multiple times, and will happen again.
The LLM is presented in the news as an AI - artificial intelligence - and a source of information. To most people, that brings to mind a trusted advisor or subject matter expert - and when they say "provide 5 legal citations that support my argument" - boy, it sure sounds convincing, because the AI is generally incapable of saying "I don't know" - and that's the dangerous bit.
Lots of tools human beings make are both useful and dangerous. Fire, the automobile, a chainsaw. We generally don't hand those out to people without some sort of training or warning. We regulate their use. But the law and human society are still catching up here.
LLMs are useful in the right hands, very much so. But they need a wrapper preventing children, the gullible, and apparently lawyers from diving in without some warnings. You simply can't trust the output the same way you'd trust, say, a teacher of the subject.
To be scrupulously fair, the inability of Korob and Sylvia to render a self-consistent and suitably fear-inducing environment, because they don't understand humans, is a significant feature of the episode.
So you could argue that this is clever thematic alignment, and definitely not a random nominally-spooky prop rushed in front of the camera because it was easy.
🛒 Remember the iconic Kmart in-store audio that set the holiday shopping vibe? 🎄🛍️ Travel back to December 1990 & relive the holiday magic with these preserved recordings: https://archive.org/details/KmartDecember1990 #ThrowbackThursday
#DOScember tests with MSDOS NETDRIVE, which assigns a new drive letter to a remote LAN or internet hard disk/diskette volume; here I'm accessing the tool author's test remote volume. I grabbed a bunch of his favorite utilities and ran programs like it was nothing. https://www.vogons.org/viewtopic.php?f=5&t=97743
More effective - and solving more problems - would be to outlaw cybercoin schemes entirely. They're the enablers of ransomware and many other scams, and add zero real value. Once the bad guys no longer have a safe way to transfer large sums, the activity becomes far riskier and less profitable, and they'll target regimes where the funny money is still legal. I am frankly surprised it hasn't already happened given the abuse we see daily.
If you use #Dropbox there are two settings you need to toggle to keep your private data from being sold or used to train AI models:
On the Desktop website:
1) Go to Help and then Cookies & CCPA preferences to enable "do not sell or share my information"
2) Then go to Settings -> Third Party AI to disable sharing your data for AI training.
Do you want to block Threads from your account?
Would you rather eat a plastic milk jug ring than see a Facebook minion meme on your timeline?
Well oh boy do I have a treat for you! If you're like me and have at least three different Mastodon accounts (or even just one!), you can domain-block them quick fast and in a hurry...
Sᴛᴇᴘ 1: Login to your account. 🖼️¹
Sᴛᴇᴘ 2: Go to Settings, then Development 🖼️²
Sᴛᴇᴘ 3: Create a New application with at least write or write:blocks access. 🖼️³
Sᴛᴇᴘ 4: After you save, click on the name of your application and copy Your access token 🖼️⁴
Sᴛᴇᴘ 5: Repeat for all your accounts, then run the following shell script...
# Each line pairs an instance hostname with its API token from Sᴛᴇᴘ 4.
readarray -t pw <<EOF
infosec.exchange Your-Api-Key1
mastodon.social Your-Api-Key2
defcon.social Your-Api-Key3
EOF

masdablock() {
  local i c u p
  while [ -n "$1" ]; do
    for ((i=0; i<${#pw[@]}; i++)); do
      # Split "hostname token" into p[0] and p[1]
      read -ra p <<<"${pw[i]}"
      u="https://${p[0]}/api/v1/domain_blocks?domain=$1"
      c="Authorization: Bearer ${p[1]}"
      curl -H "$c" -X POST "$u"
    done
    shift; echo
  done
}
masdablock threads.net
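To sanity-check that the blocks landed, the same endpoint answers a GET with the list of domains you've blocked - a quick sketch, reusing the first host/token pair from the list above:

# List existing domain blocks for the first account in pw
read -ra p <<<"${pw[0]}"
curl -H "Authorization: Bearer ${p[1]}" "https://${p[0]}/api/v1/domain_blocks"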
Mastodon API documentation relating to accounts can be found here:
What I actually want is to sideload with a minimal interpretive dance that doesn't include an apple dev account and limits on how many apps I can sideload.
fave new QOTD, via the comments on https://www.tbray.org/ongoing/When/202x/2022/11/07/Just-Dont
"For every complex problem there is an answer that is clear, simple, and wrong." -- H.L. Mencken
so apparently my preferred prog rock is "Flintstones chewable" level - and I'm surprisingly ok with that.
Crazy idea - the pre-installed system python should come out-of-the-box with one default pre-installed virtual environment, activated on user login. Make it a default. That way you can't accidentally screw up the system python, but can still just install modules. The python distro should *just do the right thing* and not count on the end-user doing the right thing after the fact.
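A back-of-the-napkin sketch of what that could look like on a stock Linux box today - the /etc/profile.d hook and the ~/.default-venv path are hypothetical, not anything a distro actually ships:

# /etc/profile.d/default-venv.sh (hypothetical)
# Create a per-user venv on first login, then activate it, so
# "pip install" lands in $HOME instead of the system python.
if [ ! -d "$HOME/.default-venv" ]; then
  python3 -m venv "$HOME/.default-venv"
fi
. "$HOME/.default-venv/bin/activate"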
Water. I drink water. Lots of it. With ice.
Father of 4, Lasers and Computers and Physics, Oh My! Soon to be a major motion picture. My Pokemons, let me show them to you.