@SecurityWriter Between that and the utterly broken keymapping menu, I can't even play that anymore.
@ZachWeinersmith Pretty sure "globalists" has always been a euphemism for "people who don't look like us, and those who support them."
@SecurityWriter One of the nice things about it is that the levels are all like 1-3 hours, so they're really good for single play sessions if you're like me and you just want to scratch the itch before getting back to work.
Kindly boost for visibility!
A friend worked in printer/plotter service for a long time. He picked up 3D printing, 3D design, and Photoshop stuff as a hobby. He made the designs for these lightsabers for himself, printed them, and painted them. There's no metal in this picture.
He's at loose ends, mid-life, and wants to change careers. Big-time movie buff. I would love to get him in touch with some people in special effects/film and make magic happen. #movies #film #design #GetFediHired
@SecurityWriter Have you tried Railgrade? It's not quite as sandbox-y as the rest (there are discrete levels to beat) but the difficulty curve is great and the verticality gives you great options. Replayability is also high.
It had me addicted for a good 100 hours or so.
@Szwendacz I have already spent hours trying, unfortunately. I disabled all plugins and still, even when writing comments, keystrokes lag by full seconds.
It's still the best option at the moment, but it's excruciating.
Fediverse hive mind, please help!
I'm so close to successfully transitioning from #Windows to #Linux, but the one thing holding me back is a working IDE for C#.
#VisualStudio apparently doesn't exist for Linux.
#VSCode does, but it doesn't have multi-monitor support and won't let me use ctrl-click for selection.
#JetBrains #Rider seems to work well, except for the endless lag while typing (sometimes taking 30+ seconds to register each keystroke).
Is there an alternative that works well? I'm on #Fedora 40 currently.
@lauren I've already conceded, long ago, that companies that allow such systems to speak for them should be liable for the results.
But that's very different from a chatbot with a disclaimer.
Nobody is stupid for believing a corporate bot that lies to them about a sale. But they are absolutely stupid if they try to get facts from ChatGPT, ignoring all the disclaimers, and then later rely on those "facts" in a critical situation.
@lauren Who exactly is being forced to use AI?
@lauren
> But my point is that users cannot be expected to understand this difference.
That's some nanny-stateism there. If they're too stupid to understand it, then they shouldn't use it, like cars or kitchen knives or matches. It's not the state's job to ruin things because some people are too stupid to use them properly.
@lauren See, I disagree with that. The negligence is in the user treating an entertainment system like a fact machine. It's every bit as negligent as only getting your news from a comedy program, or consulting Reddit for legal advice.
@interfluidity Yeah, I almost always read that as "go fuck yourself" when it comes from them.
@lauren So long as the provider has a "this might be bullshit" disclaimer, they're not dishing out "facts", so the user is responsible for improperly treating it as such.
@lauren Yeah, I think search companies presenting AI responses as fact at the top of results, especially when the user has not opted in nor acknowledged the danger, could be one of the cases where a company has chosen to use the bot's speech as its own and therefore becomes liable for it.
But, again, I draw the distinction between that conduct and a chatbot with a disclaimer.
@lauren They're there as much for the lawyers as for the users. Just like "don't eat poison" labels.
And, yeah, I think there's nuance there. When a company decides to use a chatbot as customer service, to speak on their behalf, then it absolutely should be liable for the results.
But that's a far cry from a generalized chatbot with a "don't believe my bullshit" disclaimer that can be easily manipulated by the user.
@lauren So you believe that the core issue here is that user-prompted content is first-party speech, not third-party speech? Even though the user can ask the system to repeat them verbatim (as I demonstrated above)?
@lauren The person who asked is responsible. They used the system, after being warned about its inaccuracy multiple times during the onboarding process and *underneath every prompt* (see image), and then chose to use this potentially faulty information in a life-or-death situation.
I'm a pilot. If I choose to get my weather information from ChatGPT and end up crashing as a result, that's my own damn fault.
Software engineering contractor/consultant in Florida specializing in .NET C# #WebDev, plus #Indie #GameDev in #MonoGame, #Stride, and #Godot.
I like complex simulations and enjoy writing procedural generation algorithms for fun.
#Pilot in training. Burgeoning fan of #Aviation in general.
Fan of #1A jurisprudence and the kind of #FreeSpeech that applies to everyone equally.
Pro-Democracy. Pro-Rights. Pro-Freedom. In that order.
Politically moderate, but a registered Democrat since January 7th, 2021.
He/Him 🏳️‍🌈
High risk of rants, especially given the lack of a character limit.