
Last week I had to spend hours resetting all the tokens I had because they have elected to use the Play Integrity API to decide whether a device is safe, and that API marks devices running my mobile OS as unsafe, despite the fact that it is a *hardened* version which is actually *more* secure. Aside from the inherent stupidity of this, it's just unacceptable for a 2FA solution to abruptly stop working; you are potentially leaving your users in a very tough position. As a result, I wouldn't touch Authy with a 10-foot pole in the future.

arstechnica.com/gadgets/2024/0

Nick boosted

“I’m only trying to make it to vote for Kamala Harris.”
- 99-year-old Jimmy Carter

He's been in hospice for over a year and wants to live long enough to vote for Kamala Harris. What's your excuse for not voting? #VoteBlue

usatoday.com/story/news/politi

Nick boosted

Google is disabling the most effective web spyware blocker. That tells you everything about the motives of a company that once promised to not be evil. bleepingcomputer.com/news/goog

Nick boosted

Mystery AI Hype Theater 3000 Episode 37: Chatbots Aren't Nurses

buzzsprout.com/2126417/1551797

In which National Nurses United's Michelle Mahon joined me and @alex to explore the enormous gulf between synthetic text extruding machines (with friendly synthetic faces) and actual nursing practice.

Thx to Christie Taylor for production!

Nick boosted

@Tarah I love the job the US Chemical Safety and Hazard Investigation Board is doing on YouTube to communicate about incidents, their chains of causes, applicable regulations, and recommendations to avoid similar things happening: youtube.com/user/USCSB
(and they deserve more than their 300K subscribers)

I don't know of other agencies doing anything similar in quality, and I'd LOVE something similar about cyber security.

Nick boosted

Wherein Bruce Schneier and I lay the hammer down yet again on the Cyber Safety Review Board's obligation to provide generalizable best practices and true lessons learned for cyber incidents.

"Because the CSRB reports so far have failed to generalize their findings with transparent and thorough research that provides real standards and expectations for the cybersecurity industry, we—policymakers, industry leaders, the U.S. public—find ourselves filling in the gaps. Individual experts are having to provide anecdotal and individualized interpretations of what their investigations might imply for companies simply trying to learn what their actual due care responsibilities are.

It’s as if no one is sure whether boiling your drinking water or nailing a horseshoe up over the door is statistically more likely to decrease the incidence of cholera. Sure, a lot of us think that boiling your water is probably best, but no one is saying that with real science. No one is saying how long you have to boil your water for, or if any water sources are more likely to carry illness. And until there are real numbers and general standards, our educated opinions are on an equal footing with horseshoes and hope.

It should not be the job of cybersecurity experts, even us, to generate lessons from CSRB reports based on our own opinions."

defenseone.com/ideas/2024/08/l

Nick boosted

If you are traveling this summer, remember that you can opt out of airport face scans.

Here’s how by Vox: vox.com/future-perfect/360952/

Nick boosted

If Codeberg is trying to "compete" against GitHub and GitLab, why does it refuse to take a look at AI assistants? Apart from infringing on authors' rights and questionable output quality, we think that the current hype wave led by major companies will leave a climate disaster in its wake: disconnect.blog/generative-ai-

Other _sustainable_ (and cheaper!) ways for increasing efficiency in software development exist: In-project communication, powerful automation pipelines and reducing boilerplate.

Nick boosted

I do sometimes fantasize about how hilarious it would be if I had no scruples or ethics and just set out to do the stupidest evolutionary psychology ever, but explicitly in the ways most calculated to engage evopsych. I'd be like "marriage rituals are not observed in the animal kingdom sorry" "you know what lobsters don't do? They don't have careers". The anti J**dan P*et*rson. Whatever that clown says I want to use the same animal to claim the opposite

Nick boosted

I've been participating in the fediverse for about 8.5 years now, and have run infosec.exchange as well as a growing number of other fediverse services for about 7.5 of those years. While I am generally not the target of harassment, as an instance administrator and moderator, I've had to deal with a very, very large amount of it. Most commonly that harassment is racism, but to be honest we get the full spectrum of bigotry here in different proportions at different times. I am writing this because I'm tired of watching the cycle repeat itself, I'm tired of watching good people get harassed, and I'm tired of the same trove of responses that inevitably follows. If you're just in it to be mad, I recommend chalking this up to "just another white guy's opinion" and move on to your next read.

The situation nearly always plays out like this:

A black person posts something that gets attention. The post and/or person's account clearly designates them as being black.

A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

A small army of "helpful" fedi-experts jumps in with replies to point out how Mastodon provides all the tools one needs to block bad actors.

Now, more exasperated, the victim exclaims that it's not their job to keep racists in check - this was (usually) cited as a central reason for joining the fediverse in the first place!

About this time, the sea lions show up in replies to the victim, accusing them of embracing the victim role, trying to cause racial drama, and so on. After all, these sea lions are just asking questions since they don't see anything of what the victim is complaining about anywhere on the fediverse.

Lots of well-meaning white folk usually turn up about this time to shout down the sea lions and encourage people to believe the victim.

Then time passes... People forget... A few months later, the entire cycle repeats with a new victim.

Let me say that the fediverse has both a bigotry problem that tracks with what exists in society at large and a troll problem. The trolls will manifest as racist, anti-trans, anti-gay, anti-women, anti-furry, and whatever else suits their fancy when the opportunity presents itself. The trolls coordinate, cooperate, and feed off each other.

What has emerged on the fediverse, in my view, is a concentration of trolls onto a certain subset of instances. Most instances do not tolerate trolls, and with some notable exceptions, trolls don't even bother joining "normal" instances any longer. There is no central authority that can prevent trolls from spinning up fediverse software on their own servers using their own domain names and doing their thing on the fringes. On centralized social media, people can be ejected, suspended, or banned, and unless they keep trying to make new accounts, that is the end of it.

The tools for preventing harassment on the fediverse are quite limited, and the specifics vary by type of software. For example, some software, like Pleroma/Akkoma, lets administrators filter out certain words, while Mastodon, which is what the vast majority of the fediverse uses, allows both instance administrators and users to block accounts and block entire domains, along with some things in the middle like "muting" and "limiting". These are blunt instruments.

To some extent, the concentration of trolls works in instance administrators' favor. We can block a few dozen or a few hundred domains and solve 98% of the problem. Some solutions have been implemented, such as shared block lists of "problematic" instances, but those block lists often become polluted with the politics of their maintainers, or at least that is the perception among some administrators. Other administrators take the view that people should be free to connect with whomever they like on the fediverse, and delegate the responsibility for deciding whom to block to the user.

For these and many other reasons, we find ourselves with a very unevenly federated network of instances.

With this in mind, if we take a big step back and look at the cycle of harassment I described above, it looks like this:

A black person joins an instance that does not block m/any of the troll instances.

That black person makes a post that gets some traction.

Trolls on some of the problematic instances see the post, since they are not blocked by the victim's instance, and begin sending extremely offensive and harassing replies. A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

Cue the sea lions. The sea lions are almost never on the same instance as the victim. And they are almost always on an instance that blocks those troll instances I mentioned earlier. As a result, the sea lions do not see the harassment. All they see is what they perceive to be someone trying to stir up trouble.

...and so on.

A major factor in your experience on the fediverse has to do with the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance. Some instances are much better at keeping the garden weeded than others. If a person signs up to an instance that is not proactive about blocking trolls, they will almost certainly be exposed to the wrath of trolls. Is that the Mastodon developers' fault for not figuring out a way to more effectively block trolls through their software? Is it the instance administrator's fault for not blocking troll instances/troll accounts? Is it the victim's fault for joining an instance that doesn't block troll instances/troll accounts?

I think the ambiguity here is why we continue to see the problem repeat itself over and over - there is no obvious owner nor solution to the problem. At every step, things are working as designed. The Mastodon software allows people to participate in a federated network and gives both administrators and users tools to control and moderate who they interact with. Administrators are empowered to run their instances as they see fit, with rules of their choosing. Users can join any instance they choose. We collectively shake our fists at the sky, tacitly blame the victim, and go about our days again.

It's quite maddening to watch it happen. The fediverse prides itself on being a much more civilized social media experience, providing all manner of control to users and instance administrators, yet here we are once again wrapping up the "shaking our fist at the sky and tacitly blaming the victim" stage in this most recent episode, having learned nothing and solved nothing.

Nick boosted

NPR: The trio exchanged handshakes and hugs with President Biden and Vice President Harris at the foot of their plane's stairs and embraced their family members as onlookers cheered. #news #NPR npr.org/2024/08/02/nx-s1-50605

Nick boosted

Earlier this week, a friend in UK civil society told me: "I don't think there's been an election year in recent memory where I knew less about the internet and politics."

That's not an accident. In two weeks, Meta plans to shut down its leading transparency tool called CrowdTangle, a tool used by researchers & journalists for so much reporting about current affairs.

Today @transparenttech released a report highlighting the impacts that Meta's decision will have

independenttechresearch.org/wp

Nick boosted

OpenAI is regularly citing a content mill that plagiarizes articles to make money running ads next to them instead of linking users to the originals published by the New York Times.

futurism.com/chatgpt-plagiariz

#tech #ai #openai #artificialintelligence #journalism

Nick boosted

Google is forbidding people from using a growing number of apps and services on an objectively far more private and secure OS that's holding up much better against multiple commercial exploit developers:

grapheneos.social/@GrapheneOS/

They're holding back security, not protecting it.

Nick boosted

I didn't intend to tackle this huge subject this week, but I started writing...and couldn't stop!! This week's issue of Ad Astra (video + newsletter) will be all about the ISS deorbit -- why we can't leave it up there as a museum, why we can't bring it down and save some large pieces, why we can't repurpose it, as well as how the deorbit will happen.

Sign up here: adastraspace.com

Nick boosted

As an archivist, let me say clearly:

Modern copyright law is actively harmful to the preservation and study of our culture.

It harms artists by giving undue power to publishers. It harms artists by limiting remixes. It harms preservation and research efforts.

It's broken. It's bad. It should be massively reformed.

Nick boosted
Qoto Mastodon

QOTO: Question Others to Teach Ourselves
An inclusive, Academic Freedom, instance
All cultures welcome.
Hate speech and harassment strictly forbidden.