Kissake boosted

Video 3/3 (I feel this one •deeply•, to the bone):

“Developer watching QA test the product”

Kissake boosted

@shansterable @chris

I think that's true, but also:

This software underpins a LOT of other software, which means the potential scope of the problem isn't just this one piece of software but everything that relies on it: that includes OpenSSH (obviously, since the backdoor was detected there), but also tons of other software, like the Linux kernel.

Analysis so far as I've seen (haven't looked hard) points to specifically OpenSSH being targeted, rather than other dependents. Still, everyone who depends on this library (which is a lot more people than just the xz programming community) is going to double-check a lot of stuff because of this find.

Plus, it is going to make folks in the open-source software (OSS) community just a bit more paranoid. It might be unjust if anyone thinks of this as unique to OSS, since SolarWinds had a not-too-dissimilar issue not too long ago. (This one is different at least in that we can see, in public, exactly when the change was made and what it was.)

Kissake boosted

@tech I enjoyed the alt-text (Achilles riding a skateboard?), but it would not help someone understand your message.

If you can update it to read: "A picture of Achilles (representing programmers) and an arrow (representing regex / regular expressions) in his heel (his proverbial one vulnerable spot)", that might be more informative.

I enjoyed the joke; just trying to share it with more people.

Reminds me of: "There are some people who, on discovering a problem, decide to solve it with regexes. Now they have two problems."

Kissake boosted

CEO June Andronick gives an update on the seL4 Foundation at the Summit

@mathaetaes @digibrarian

I hear your concerns, but I feel like you're moving the goalposts and are a little more attached to your objection to this piece of hardware than you are to improving voting.

1) You expressed concern about 2 things that don't apply to this device (a network that doesn't exist and a lack of a paper trail that instead does exist).

2) You haven't acknowledged that those original concerns are unfounded, nor meaningfully challenged my assertions, and

3) You are raising new concerns (that are also poorly founded, see below).

I feel like this should be my last contribution to this conversation until you show that you're taking my responses seriously.

My response to your concern about the paper trail only being checked in an audit is that A) audits are done all the time [examples: elections.maryland.gov/voting_ and vote.nyc/page/canvass-informat ], and B) it doesn't take much auditing to detect systematic flaws and attacks sufficient to change an election.

For A, the first two states I picked to do a quick search for how votes are counted clearly show that voting machines are picked randomly to be audited. An audit happens every time for those states, one of which is the state this machine is proposed for.

"But what if that machine wasn't the one that was compromised?!" you say? If you're trying to change the election, you need to _both_ have enough vote changes that the result changed, _AND_ a low enough chance of that fact being discovered that there is no hand recount from the audit data.

Combine the fact that elections that are close often require a hand recount automatically with the above random sampling audits, and hopefully you'll either agree that a hand recount of every vote isn't needed, or propose an attack or flaw that is outcome determining AND has a very low risk of being detected.
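The intuition behind random sampling audits can be sketched numerically. Here's a toy calculation (the machine counts and audit sizes are made-up illustration numbers, not from any real election) showing why an attacker who tampers with enough machines to swing a result has trouble dodging even a small random audit:

```python
from math import comb

def detection_probability(total_machines: int, compromised: int, audited: int) -> float:
    """Probability that a uniform random audit of `audited` machines
    catches at least one of `compromised` tampered machines."""
    # P(miss every compromised machine) = C(total - compromised, audited) / C(total, audited)
    miss = comb(total_machines - compromised, audited) / comb(total_machines, audited)
    return 1 - miss

# Swinging an election typically requires tampering with many machines;
# the more machines touched, the harder it is to evade even a 2% audit.
print(detection_probability(1000, 50, 20))   # 50 of 1000 machines compromised
print(detection_probability(1000, 200, 20))  # 200 of 1000 machines compromised
```

With 200 of 1000 machines compromised, auditing just 20 random machines catches the attack with better than 98% probability, which is the "low risk of being detected" constraint in action.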

If you want to advocate for change, how about advocating for risk-limiting audits: [ stat.berkeley.edu/~stark/Prepr ] instead of the fixed sample sizes?

To be clear, I'm not the world's biggest advocate of using computers in voting. I trust computers about as far as I can see their electrons (not very far).

However, I think it is important to acknowledge when someone actually implements a voting system that checks the important boxes, just like you need to stop boycotting a business when they've met your demands.

@mathaetaes @digibrarian

There is still no mention of a network in either the article or the product description, so I'm still not sure where your references to networking come from. I generally agree that having voting machines on a network isn't a good plan, so I'm not arguing in principle, just that your objections to this device don't seem to be based on this device.

For this machine, per the product web page, it actually prints a paper description of the voted ballot in human-readable form, held behind a pane of glass (so the voter can check it for correctness but cannot damage or manipulate it directly), so it is apparently auditable.

In what ways does that fail to address your "all digital" concern with this device?

@digibrarian

I read the article and the vendor's web page on the ExpressVoteXL. I don't see anything about a network.

Also, I don't understand your objection to touch screens. If touch screens are easier for those with disabilities to use (and certainly this system seems to offer support for a wide variety of accessibility devices), how is that a bad thing? It seems to me like it expands access to the ballot box beyond those with good eyesight and the dexterity and strength to use a pencil, which seems unnecessarily restrictive.

I don't know that this technology is the best, nor what other challenges it might have, but I don't understand these objections. That doesn't create an obligation in you to explain in more detail, but you (and others) are welcome to do so if you choose.

@tech

The person that put in the 2-post did a decent job (you can see the white cables out the back going into the ceiling from the patch panel(s?); looks fine) except that the vertical cable management has come loose at the top somehow.

I see what looks like one 2U horizontal cable manager in the middle? ... so I feel like they were set up for success mostly.

I'm pretty sure literally everyone after that point didn't "get it" in the slightest possible sense of the phrase.

Entertaining parts:

- A cable label... in the middle of the cable so you aren't distracted by information when unplugging it.

- 2-post mounted devices used as shelves because... they're flat?

- Is that a printer in the back right _on_top_of_an_old_Dell_desktop_!?!?!

On second (fifth?) review, I stand corrected. There are about 5 white cables coming out of the bottom-most patch panel that, despite not going through the horizontal guide above, do seem to go mostly horizontally to the nearest vertical guide. Maybe that was intentional?

I feel like I could "fix" it with 2 2-post shelves, 6-7 2U horizontal cable guides, a middling quantity of 3-6 foot CAT6 (for overkill), and about 30 minutes of outage window (actual time required: notably higher). Any bids lower?

Kissake boosted

I'm just a girl, standing in front of the entire infosec community, asking them to give practical, simple digital security/privacy advice to people seeking abortions instead of describing outlandish Jason Bourne scenarios.

@brendan0x5 So, my guess is: Maybe, but it isn't an obvious win.

Here's why:

Part of the proof effort has been to generate a representation (in part, of the portions of the C language being used) in another language used by the formal prover (Isabelle) [cdn.hackaday.io/files/17139373].

In order to switch to Rust (or whatever flavor of memory safety you like best), you'd need to re-do that work for your language of choice, including being able to prove that the binaries produced by your compiler accurately represent the source code put into the compiler.

Obviously not impossible, but a boatload of work, and if you've already proven correctness, what win are you looking for in making such a change?

Keep in mind that while there are ~10k lines of code (see above whitepaper) and that sounds like a lot, the kernel doesn't do memory management (for that matter, it doesn't do very much at all, thus "micro"), and 10k lines isn't really that much, programming-wise.

All the kernel is in charge of is "minimal mechanisms for controlling access to physical address space, interrupts, and processor time." [docs.sel4.systems/projects/sel], and in particular "All dynamic allocation in the kernel handled via capability system. No dynamic memory allocation in the kernel!" [cl.cam.ac.uk/research/security]

I'll admit that I don't know much about memory safe languages, but if there isn't dynamic memory allocation, and if the kernel doesn't do much anyway (everything else is pushed out to unprivileged isolated user-space), I'm curious what you think they could bring to the table.

For clarity, I'm not being sarcastic; I'm still learning about seL4 and am open to learning about new programming languages.

In particular, I think that a programming language designed around a capability system would be a huge win. Keep in mind that the current default is that every programmer acts as though they have a god-given right to every resource the system can offer, as they have learned from history and from having admin access to every machine they have ever touched.

Or possibly they believe (the ones who have been converted, or who have been exposed to systems where they aren't admins and don't take that as a personal affront) that access-control-based security is the pinnacle of security / correctness.
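To make the contrast concrete, here is a minimal sketch (pure illustration, not any real OS or seL4 API) of the capability idea: instead of code naming any resource it wants and relying on an ambient access-control check, code can only act on resources it has been explicitly handed, with the rights baked into the reference itself.

```python
# Toy illustration: a capability is an unforgeable reference that
# bundles a resource with a fixed set of rights.

class FileCapability:
    """Wraps one file's contents with explicit read/write rights."""
    def __init__(self, contents: str, readable: bool = True, writable: bool = False):
        self._contents = contents
        self._readable = readable
        self._writable = writable

    def read(self) -> str:
        if not self._readable:
            raise PermissionError("capability does not grant read")
        return self._contents

    def write(self, data: str) -> None:
        if not self._writable:
            raise PermissionError("capability does not grant write")
        self._contents = data

def summarize(cap: FileCapability) -> str:
    # This function can touch ONLY the resource it was handed, and only
    # with the rights carried by the capability it received.
    return cap.read().upper()

readonly = FileCapability("hello")
print(summarize(readonly))  # the function can read, but any write attempt raises
```

The point of the sketch: there is no global namespace of files for `summarize` to reach into, so "god-given right to every resource" simply isn't expressible.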

For clarity, I've been both of these people, and they are both adorable positions to be wrong from.

@webklex
If the open source software is distributed by others (e.g. major Linux distros), you can contact security teams for the distributors. Even if they can't contact the vendor either, they can work to mitigate the issue for their users.

It's more work for you, but moves the ball in the right direction. On the bright side, most Linux distro security teams are likely to be accessible and at least mildly on the ball. Your decision though.

@BenMonreal

First, good on you for paying attention and asking, and I don't think it's a dumb question. You're the kind of team member that is an asset to your company's security posture in my opinion.

My recommendation from your perspective as a user (rather than an IT admin) would be butt-covering, combined with providing an opportunity for your org to share info with you / improve your understanding on the topic.

My suggestion is to explicitly raise your concerns to someone who has the authority to direct your behavior: your supervisor, the highest-ranking IT person (e.g. a CIO, if you have one), and/or the company's overall leader (easier at a small company, but regardless of the CEO's tech ability, the IT person should be able to explain their decisions to a CEO in words they can understand). Then ask them to tell you what you should do. This way, when you do it their way, you can point to that direction and say "I'm doing what I was told" if things go south for any reason.

Something like: "My current expectation is that I should only enter my SSO credentials <in a specific set of ways> to ensure they are not intercepted, captured and/or abused by a third party.

Tools X, Y, and Z (as well as others) violate that expectation by requiring me to enter my credentials in another way: <specify differences>.

Can you either confirm that it is required for me to enter my credentials in this different manner for tools like these or share with me the correct procedure?

Please also share how you expect me to participate in protecting the company from the risk of intercepted SSO credentials in the future (e.g. by alerting you when I observe this, as I am doing now, or some other approach)."

The short answer to your question of how to tell is:

My assumption is that SSO authentication at a third-party URL is NOT legit, and that it is a sign the third party does not have their security act together.

That said, a business may choose to accept that risk or mitigate it in any number of ways and for a variety of reasons (including that the benefit of having access to the tool balances the added risk).

@freemo @admitsWrongIfProven @neekeeteen @Wolven

There may be some misconception here. I'm pretty sure the loans in question in the court case are loans from the federal government. The only private organizations involved, as far as I'm aware, are servicing the loans (providing services around them, like answering questions and collecting payments for the government). While they would have less business if the loans were forgiven, that isn't the same thing as "paying for it", and they aren't "issuing" the loans.

As far as predatory behavior, there were definitely some predatory "schools" like ITT Tech or DeVry University; is that what you're referencing?

@alex_02@infosec.exchange @xabean

Soooo... I think I have bad news. I'm not a cryptography expert, but I do have the benefit of knowing more about it than 10 people picked randomly out of a crowd that isn't a cryptography convention (probably like many of you), so there's that.

Here's the thing. Public key cryptography (at least RSA, and I think in general by contrast to symmetric key) is slow, and it is deterministic.

The first isn't a big deal (ish) because computers are fast (you'll find that isn't as true as you'd like the first time you try to encrypt a particularly large file).

The second one will crush you, and is why cryptographers make the big bucks. The person who raised ECB vs. CBC (@xabean) wasn't wrong: see the picture on this Wikipedia page for what happens to your data when there is too much repetition and you apply a deterministic algorithm to it piecemeal: en.wikipedia.org/wiki/Block_ci

If your test for whether cryptography works is whether you recognize the data after it is transformed (hopefully not) and can return the data to its original form (hopefully), there are _lots_ of transformations that will qualify without protecting your information one whit.
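Here's a toy demonstration of both points at once. This is deliberately NOT real cryptography: the "cipher" below is just XOR with a fixed key, but it is reversible, so it would pass the naive "data looks scrambled and I can get it back" test while leaking structure exactly the way ECB mode does:

```python
# Toy demo of why a deterministic block-by-block transform leaks structure.
# NOT real crypto: a stand-in "cipher" (XOR with a fixed 16-byte key).

KEY = bytes(range(16))  # fixed toy key, illustration only

def toy_encrypt_block(block: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(block, KEY))

def toy_encrypt_ecb(data: bytes) -> bytes:
    # ECB-style: each 16-byte block is transformed independently.
    return b"".join(toy_encrypt_block(data[i:i+16]) for i in range(0, len(data), 16))

plaintext = b"ATTACK AT DAWN!!" * 3   # the same 16-byte block, three times
ciphertext = toy_encrypt_ecb(plaintext)

# Identical plaintext blocks produce identical ciphertext blocks,
# so repetition in the data remains visible in the output.
blocks = [ciphertext[i:i+16] for i in range(0, len(ciphertext), 16)]
print(blocks[0] == blocks[1] == blocks[2])  # True
```

A real mode like CBC chains each block into the next (with a random IV), so the three identical input blocks would come out looking unrelated.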

I'd say that you'd be better off taking advantage of cryptography software written by experts (e.g. openssl or gpg) for this purpose. It will be faster and more secure, at the cost of a slightly steeper learning curve than the program you wrote (but a much shallower learning curve than becoming a cryptography expert).
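For instance, a sketch with the standard openssl CLI (the filenames and passphrase are placeholders; in real use, prefer key files or gpg's key management over passing a passphrase on the command line):

```shell
# Encrypt with AES-256 in CBC mode, deriving the key from a passphrase with PBKDF2.
echo "my secret data" > secret.txt
openssl enc -aes-256-cbc -salt -pbkdf2 -in secret.txt -out secret.txt.enc -pass pass:correcthorse

# Decrypt and confirm the round trip.
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.txt.enc -out secret.decrypted -pass pass:correcthorse
cmp secret.txt secret.decrypted && echo "round trip OK"
```

Note how the library handles the mode, salt, and key derivation for you; those are exactly the details that bite people who roll their own.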

@occirol

I generally like questions that start with "what can possibly go wrong", but to be honest, this one isn't super-scary to me. If this is the reason you have backups at all, I'd consider this a huge win.

The risk I see is that the attacker that disables your anti-malware solution potentially impacts your other defense against malware (e.g. crypto-ransomware). Practically speaking though, you have backups, so ... could have been much worse.

The caveat is that you do still need to manage your backups and protect your backup media, but that's true regardless of your backup tool.

@eobeara

I think the etiquette is: Don't be a jerk. (kind of a baseline for etiquette)

This person felt safe. For a variety of reasons (including your default behavior being: don't mess with people, even if you know how to), it sounds like they were correct to feel safe in this instance.

Unless you have good reason to question this judgement (e.g. you personally successfully intervened to stop another person from taking advantage of them), I would encourage you to at least consider that the risk they took might be reasonable and/or mitigated in ways that were not obvious to you.

Also, consider that making that person feel unsafe may not help them in the short or long term. My ideal world is one where the default is that we feel safe, not the one where everyone is constantly on guard against everyone else. If someone manages to get there before me, that is a win that I want to figure out how to replicate, not a challenge to my choices.

In fairness, I'd like that feeling of safety to be grounded in reality, but it doesn't seem to me that making this person feel unsafe necessarily makes them safer.

Kissake boosted

@python_discussions anyone who uses "simply", "just", "all you need to do is..." has no idea what another person will need to do to solve an issue. Their experience will not match what others will have to do. I've seen it happen too often for that not to be a rule... #programming #infosec #cybersecurity
