Google has suspended Element (@matrix) from the Play Store for "Sexual Content and Profanity". Basically the same story as with Subway Tooter a while back. Element is to Matrix as Chrome is to the web. Curiously, Chrome is still on the Play Store.
@timorl If it's an open instance you could also make an account there and try following someone here.
For today's #FollowFriday #FF, here's a bunch of cool (mostly nature) photographers:
@david
@jett1oeil
@hoernchen72
@sohkamyung
@kristapsdz
@bobfisherphoto
@c0c0bird
@IanCykowski
@ete2
@kernpanik
If you are affected by the startWithAudioMuted/startWithVideoMuted bug in Jitsi Meet, the following workaround might be interesting to you:
https://gist.github.com/cketti/f0ed9b722d04618b33a7269d030e1072
@freemo I mean, the crowd did kill two of those people... One deliberately, the other accidentally.
@freemo Looks like one of those people was trampled by other rioters, one had a heart attack, and another had a stroke.
Hard to blame the police for any of those deaths except the woman who was shot.
@freemo Yeah, should have been multiprocessing.RLock or whatever specific re-entrant lock you are using. I probably shouldn't have even namespaced it 😛
@freemo Hmm, weird. If there is no solution that allows you to have more than 500 RLocks, maybe you can cover a larger array by locking "blocks" of the array for your atomic operations. Then you can have 500 locks spread across 20,000 elements.
Python lists just hold references to objects (multiplying a list repeats the same locks rather than copying them), so you can do something like this:
import math, random, threading

locks = [threading.RLock() for _ in range(500)]
# List multiplication repeats *references* to the same 500 locks, not new locks
locks = locks * int(math.ceil(len(array) / 500))
random.shuffle(locks)
locks = locks[:len(array)]
Each element is randomly assigned one of the 500 locks, so even if the access pattern is non-random the lock distribution will be. At any given time you have 32 processes contending for 500 locks, which seems like good odds.
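In use, each "atomic" operation just takes the lock paired with whichever element it touches. A minimal sketch, assuming the operation is an in-place increment on the shared array:

with locks[i]:  # i is whichever index this worker is updating
    array[i] += 1  # hypothetical operation; whatever the real atomic update is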
@freemo Though TBH I'd kinda love to have a problem that admits an embarrassingly parallel solution as an excuse to write something significant in Rust to test out that "fearless concurrency".
Closest I've come is this: https://gitlab.com/pganssle/metadata-backup
There's a big queue that theoretically could be read in parallel, but I think the file system access ends up being the bottleneck, because adding multithreading into the mix doesn't seem to have meaningfully sped anything up.
@freemo Weird. If you are doing so much in parallel and it's a big part of your operation (and you think it's worth it to explore this further) it might make sense to try out using Cython or Numba with a function that releases the GIL, then use multithreading instead of multiprocessing.
Running hundreds of processes and serializing / deserializing your data probably creates a ton of overhead.
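A rough sketch of the Numba version (the function names and the chunked sum-of-squares workload are just placeholders for whatever your real per-element work is):

import numpy as np
from numba import njit
from concurrent.futures import ThreadPoolExecutor

@njit(nogil=True)  # the compiled function releases the GIL while it runs
def process_chunk(chunk):
    total = 0.0
    for x in chunk:
        total += x * x
    return total

def process_parallel(arr, workers=32):
    # With the GIL released, threads can run the compiled kernel in parallel
    chunks = np.array_split(arr, workers)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(process_chunk, chunks))

No pickling of the data and no separate interpreter per worker, since everything stays in one process.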
@freemo And yeah, I am far from thinking Python has the best, most ergonomic parallelism story, but I do think Python scales better than many people give it credit for. 🙂
@freemo Nice! Glad to hear it!
@freemo I don't know of any offhand, sorry. Julien Danjou has an article on it that you've probably seen: https://julien.danjou.info/atomic-lock-free-counters-in-python/
I'm mildly surprised I don't have an easy answer for this honestly. Seems like something that should at least be in toolz or boltons or something.
I admittedly don't do a ton of concurrency stuff that would need this (in Python), though, so there may be something obvious I've missed.
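For what it's worth, the kind of helper I'm picturing is tiny anyway. A minimal sketch (a hypothetical AtomicCounter, just a value guarded by a threading.Lock, not lock-free like the techniques in that article):

import threading

class AtomicCounter:
    # Hypothetical helper: a plain lock-guarded counter, not a lock-free one
    def __init__(self, initial=0):
        self._value = initial
        self._lock = threading.Lock()

    def increment(self, delta=1):
        with self._lock:
            self._value += delta
            return self._value

    @property
    def value(self):
        with self._lock:
            return self._value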
@freemo I'm not sure the GIL-ectomy had come to fruition in time for the deadline, and the 2→3 migration was actually not so bad compared to how it could have been if Python 3 had *also* made extensive changes to the C API.
From what I can tell, the 2 to 3 migration nearly killed Python as a language, and might well have actually killed it if it had been any worse. Hard to Monday morning quarterback on this.
@freemo I am not sure which alternate interpreters don't have a GIL, but I would be shocked if they had the level of compatibility with third party (and private/proprietary) libraries and applications that would be required for upstreaming into CPython.
Another issue is that you may be able to get perfect compatibility at the cost of performance degradation for everything else. Having no GIL but being 30% slower is not a good trade-off for most people, particularly in a mature software ecosystem that evolved with the presence of the GIL.
@freemo Yes and yes.
PyPy, for example, does have a GIL, and has an extensive compatibility layer to support the C API. My understanding is that for a long time PyPy didn't work *at all*, or worked very poorly, when used with anything that uses the C API.
Even now, it's touch-and-go, and many third party libraries aren't tested against PyPy.
@freemo Probably the most promising thing on the horizon for the GIL removal (and many other problems caused by the way the C API works) is HPy: https://github.com/hpyproject/hpy
That still basically involves rewriting all C extensions to use handles instead of manually managed reference counts, and there is very little appetite for another "break the universe" change when people generally have a number of good solutions for this problem already.
@freemo The hard part isn't fixing it for random non-standard interpreters. The hard part is fixing it in the *core* interpreter without breaking all the stuff built on top of it.
Like, it's easy to swap out the tires on your car when the thing is on a jack and not even fully assembled. It's a lot harder to swap them out while driving down the highway at 65 miles per hour while rushing someone to the hospital 😛
Programmer working at Google. Python core developer and general FOSS contributor. I also post some parenting content.