
@SecurityWriter Have you tried Railgrade? It's not quite as sandbox-y as the rest (there are discrete levels to beat) but the difficulty curve is great and the verticality gives you great options. Replayability is also high.

It had me addicted for a good 100 hours or so.

store.steampowered.com/app/135

@Szwendacz I have already spent hours trying, unfortunately. I disabled all plugins and still, even when writing comments, keystrokes lag by full seconds.
It's still the best option at the moment, but it's excruciating.

@LPerry2 @lauren Yep. One of many reasons I recommend people stop using Google.

Check out Kagi if you haven't already. It's paid, but it's great, and it has a free trial.

@penguingeek@mastodon.social It's less about the code and more about the tool windows. I used to have my solution explorer and output and properties and all that on another window so they didn't get in the way of the code. VS Code doesn't seem to allow them to be undocked from the window.

Fediverse hive mind, please help!

I'm so close to successfully transitioning from Windows to Linux, but the one thing holding me back is a working IDE for C#.

apparently doesn't exist for Linux.

does, but it doesn't have multi-monitor support and won't let me use ctrl-click for selection.

seems to work well, except for the endless lag while typing (sometimes taking 30+ seconds to register each keystroke).

Is there an alternative that works well? I'm on 40 currently.

@lauren I've already conceded, long ago, that companies that allow such systems to speak for them should be liable for the results.

But that's very different from a chatbot with a disclaimer.

Nobody is stupid for believing a corporate bot that lies to them about a sale. But they are absolutely stupid if they try to get facts from ChatGPT, ignoring all the disclaimers, and then later rely on those "facts" in a critical situation.

@lauren
> But my point is that users cannot be expected to understand this difference.

That's some nanny-stateism there. If they're too stupid to understand it, then they shouldn't use it, like cars or kitchen knives or matches. It's not the state's job to ruin things because some people are too stupid to use them properly.

@lauren See, I disagree with that. The negligence is in the user treating an entertainment system like a fact machine. It's every bit as negligent as only getting your news from a comedy program, or consulting Reddit for legal advice.

@interfluidity Yeah, I almost always read that as "go fuck yourself" when it comes from them.

@lauren So long as the provider has a "this might be bullshit" disclaimer, they're not dishing out "facts", so the user is responsible for improperly treating it as such.

@lauren Yeah, I think the search companies putting AI responses as fact at the top of results, especially when the user has not opted in nor acknowledged the danger, could be one of the cases where a company has chosen to use the bot's speech as its own and therefore becomes liable for it.

But, again, I draw the distinction between that conduct and a chatbot with a disclaimer.

@lauren They're there as much for the lawyers as for the users. Just like "don't eat poison" labels.

And, yeah, I think there's nuance there. When a company decides to use a chatbot as customer service, to speak on their behalf, then it absolutely should be liable for the results.

But that's a far cry from a generalized chatbot with a "don't believe my bullshit" disclaimer that can be easily manipulated by the user.

@lauren So you believe that the core issue here is that user-prompted content is first-party speech, not third-party speech? Even though the user can ask the system to repeat them verbatim (as I demonstrated above)?

@lauren The person who asked is responsible. They used the system, after being warned about its inaccuracy multiple times during the onboarding process and *underneath every prompt* (see image), and then chose to use this potentially faulty information in a life-or-death situation.

I'm a pilot. If I choose to get my weather information from ChatGPT and end up crashing as a result, that's my own damn fault.

@cholling @mattcjordan @carnage4life Pretty sure the laws about spam don't actually have a quantity requirement.

Also, "express goal of replacing them" is pretty funny. That's like claiming that McDonalds is expressly trying to put high-end steakhouses out of business.

@lauren Tbf, I've used some Excel spreadsheets that were pretty "black box" too.
But more importantly, the transparency of an algorithm has no bearing on the liability for speech resulting from its use. Nearly every video game is a black box. Should the publishers therefore become liable for user content (like online voice chat) as a result?

@lauren Yeah, and a law should be passed making Microsoft fully responsible for any and all content created with Microsoft Excel. Period. No exceptions.
