@pganssle @jugmac00 @hynek Officially announced today as well:
https://github.com/actions/setup-python/issues/544#issuecomment-1332535877
@mattjohnsonpint Yeah, Hynek figured it out in a parallel reply: https://mastodon.social/@hynek/109434203897348469
So now the Windows + Python users of northern Mexico can have more accurate time zones today. If they update their Python packages. 😅
@mattjohnsonpint Hmm... Allegedly you can use it with anything in this list: https://github.com/actions/python-versions/blob/main/versions-manifest.json
But it doesn't seem to be working, at least for 2.7 and 3.6: https://github.com/python/tzdata/actions/runs/3585907944/jobs/6034467045
They aren't very clear about how to specify it in this README: https://github.com/actions/setup-python/blob/main/docs/advanced-usage.md#available-versions-of-python-and-pypy
Inspired by @mkennedy, and the work I'm doing on profiling for Python data processing jobs, some initial scattered thoughts on how performance differs between web applications and data processing, and why they therefore require different tools.
1. Web sites are latency-focused. Web applications typically require very low latency (milliseconds!) from a _user_ perspective. Throughput matters from the website operator's perspective, but it's more about cost.
Just did the first pass of fork()-based multiprocessing profiling for Sciagraph (https://sciagraph.com), a profiler for #Python #datascience pipelines.
First test passed, now to polish it up.
Notes:
1. In case you are not aware, #Python's `multiprocessing` on Linux is BROKEN BY DEFAULT (https://pythonspeed.com/articles/python-multiprocessing/); see the sketch after these notes for the usual workaround.
2. As a result, this code is quite evil.
3. I am so so happy I am writing software in #Rust. Writing a robust profiler of this sort in C++ would've been way beyond my abilities.
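For anyone hitting this, a minimal sketch of the usual workaround (the function and pool size here are placeholders, not Sciagraph code): explicitly request the "spawn" start method instead of relying on the Linux default of "fork".

```python
import multiprocessing

def work(x):
    return x * x

if __name__ == "__main__":
    # On Linux the default start method is "fork", which can deadlock if the
    # parent process holds locks (e.g. in logging or threading internals) at
    # fork time. "spawn" starts a fresh interpreter instead, trading slower
    # worker startup for correctness.
    ctx = multiprocessing.get_context("spawn")
    with ctx.Pool(4) as pool:
        print(pool.map(work, range(10)))
```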
The release candidate for tox 4 - a complete rewrite of the project - is now out; see https://tox.wiki/en/rewrite/changelog.html#v4-0-0rc1-2022-11-29. Please try it: if no show-stoppers are reported, it will become the stable release on the 6th of December 2022. I'd hate to break your CI, so test it beforehand. 😀 Thanks! https://pypi.org/project/tox/4.0.0rc1/
@btskinn That just seems weird to me. I mainly put all my requirements in `tox`, and occasionally I'll create a virtualenv where I do `pip install -e . && pip install ipython` just to have a little environment where I can play around with my library or whatever.
You can also just, like, not include `.` in your `requirements-dev.txt` file: `pip install -e . && pip install -r requirements-dev.txt`.
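For concreteness, a hypothetical `requirements-dev.txt` in that style contains only the dev tools, with the package itself handled by the separate `pip install -e .` step:

```
# requirements-dev.txt (hypothetical contents) - note: no `.` or `-e .` line
pytest
ipython
```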
@simon Why `pip install -e`? That is usually a red flag for me in non-interactive environments, because it suggests you may be relying on some accidental "feature" of the editable install mechanism.
Better to test as your users are expected to install it: https://blog.ganssle.io/articles/2019/08/test-as-installed.html
@brainwane On my instance it's a quote-tweet. I was organizing it as "Interesting details about the project in the thread, then this call for users as a QT-comment-on-the-thread".
If anyone else finds this kind of thing useful, I'd totally love it if someone started using this project. Particularly if you are the kind of person who is going to make lots of improvements to the front-end and then send me PRs 😉
I keep coming up with interesting improvements for this project, but I only have so much time to work on stuff like this.
I started this application in December 2016, before I knew anything about databases, so I hacked together a pseudo-DB out of YAML files, because I wanted to be able to edit the files by hand if I screwed up. As this "database" grew, parsing huge YAML files became a bottleneck; I lived with this for years, but recently, I managed to switch over to using a SQLite database!
This was surprisingly easy, because I already had a pseudo-ORM and I just load the whole "database" into memory at startup. That said, I'm still not using the features of a "real database": my "queries" are basically Python code iterating over dictionaries and such.
I really like the "segmented" feed, which breaks up books along chapter and/or file boundaries, recombining them to minimize total deviation from 60m files. I like to listen to audiobooks in ~60 minute chunks, and this automates the process of chunking them up for me.
The implementation was a rare example where dynamic programming was useful in the wild (and not just in job interviews): https://github.com/pganssle/audio-feeder/blob/1a07c8ffa7c7b548471f979382fedb653ce6ee5a/src/audio_feeder/segmenter.py#L45-L102
Thanks to @njs for suggesting the approach and basically implementing it flawlessly on the first try.
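For the curious, here's a minimal sketch of that dynamic programming idea, not the actual `segmenter.py` code (the names and the cost function are simplified assumptions): given chapter durations in order, choose contiguous groups to minimize total deviation from a 60-minute target.

```python
import functools

TARGET = 60 * 60  # target segment length, in seconds

def segment(durations: tuple[float, ...], target: float = TARGET) -> list[list[float]]:
    @functools.cache
    def best(i: int) -> tuple[float, tuple[int, ...]]:
        # Minimum total deviation for durations[i:], plus the split points used.
        if i == len(durations):
            return (0.0, ())
        best_cost, best_splits = float("inf"), ()
        total = 0.0
        for j in range(i, len(durations)):
            total += durations[j]  # group durations[i..j] into one segment
            rest_cost, rest_splits = best(j + 1)
            cost = abs(total - target) + rest_cost
            if cost < best_cost:
                best_cost, best_splits = cost, (j + 1, *rest_splits)
        return (best_cost, best_splits)

    _, splits = best(0)
    segments, start = [], 0
    for end in splits:
        segments.append(list(durations[start:end]))
        start = end
    return segments

# Chapters of 20, 45, 30, 70, and 25 minutes, grouped into ~60-minute chunks:
print(segment(tuple(m * 60 for m in (20, 45, 30, 70, 25))))
```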
I've also created this (probably) convenient docker-compose repository for (somewhat) easily deploying `audio-feeder`: https://github.com/pganssle/audio_feeder_docker
Now featuring ✨🌟✨*installation instructions*✨🌟✨ (so fancy).
Yesterday I released version 0.6.0 of my audiobook RSS server, `audio-feeder`: https://github.com/pganssle/audio-feeder
It takes your directory of audiobooks and generates an RSS feed for each one, so that you can listen to them in your standard podcast listening flow.
I'm particularly happy with the new feature "rendered feeds", which uses `ffmpeg` behind the scenes to generate alternate feeds where the audiobook is broken up along different lines.
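As a rough illustration of the kind of `ffmpeg` invocation involved (my own sketch, not audio-feeder's actual code; the function name and arguments are made up):

```python
import subprocess

def extract_segment(src: str, dest: str, start: float, duration: float) -> None:
    """Copy `duration` seconds of audio starting at `start` into `dest`."""
    subprocess.run(
        [
            "ffmpeg",
            "-ss", str(start),    # seek to the segment start
            "-i", src,
            "-t", str(duration),  # keep this many seconds
            "-c", "copy",         # stream copy: fast, but cuts land near keyframes
            dest,
        ],
        check=True,
    )

# e.g. pull out the first hour of an audiobook:
extract_segment("book.m4b", "book-part1.m4b", 0.0, 3600.0)
```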
@brainwane The library is great, but Null Island is so inconveniently located 😉
Programmer working at Google. Python core developer and general FOSS contributor. I also post some parenting content.