On December 1 at 15:00 UTC, as part of #PyData Global 2022, I am leading a tutorial on #Bayesian Decision Analysis.

Learn more and register here: buff.ly/3gDgFLh

PyData Global uses pay-what-you-can pricing, with donations based on location, so it is accessible to all!

@jacob @ambv FWIW I've been using KeepassXC (and before that KeepassX and Keepass) for over a decade. I sync it to all my devices over WAN-only using syncthing, but it would be pretty easy to sync using any other file syncing service. Very happy with it, and I'm also happy with Keepass2Android Offline as well.

@mattjohnsonpint Yeah, Hynek figured it out in a parallel reply: mastodon.social/@hynek/1094342

So now the Windows + Python users of northern Mexico can have more accurate time zones today. If they update their Python packages. 😅

@jugmac00 @hynek Ah, there we go, that was very unclear to me! Thanks!

Anyone have a good example of setting up GitHub Actions for old, out-of-support versions of Python (e.g. 2.7, 3.6)?

Inspired by @mkennedy, and the work I'm doing on profiling for Python data processing jobs, some initial scattered thoughts on how performance differs between web applications and data processing, and why they therefore require different tools.

1. Web sites are latency-focused. Web applications typically require very low latency (milliseconds!) from a _user's_ perspective. Throughput matters from the website operator's perspective, but there it's mostly a question of cost.

Just did first pass of fork()-based multiprocessing profiling for Sciagraph (sciagraph.com), a profiler for #Python #datascience pipelines.

First test passed, now to polish it up.

Notes:

1. In case you are not aware, #Python's `multiprocessing` on Linux is BROKEN BY DEFAULT (pythonspeed.com/articles/pytho).
2. As a result, this code is quite evil.
3. I am so so happy I am writing software in #Rust. Writing a robust profiler of this sort in C++ would've been way beyond my abilities.
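
In case it's useful context, here's a minimal self-contained sketch (not Sciagraph's code) of how a Python program can opt out of the problematic Linux default by requesting the "spawn" start method explicitly:

```python
import multiprocessing

def square(x):
    return x * x

if __name__ == "__main__":
    # On Linux the default start method is "fork", which can copy locks and
    # other thread-related state into the child in an inconsistent way and
    # deadlock. Requesting "spawn" (or "forkserver") sidesteps that class of
    # bug, at the cost of slower worker startup.
    ctx = multiprocessing.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # → [1, 4, 9]
```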

The release candidate for tox 4 - a complete rewrite of the project - is now out; see tox.wiki/en/rewrite/changelog. Please try it: if no show-stoppers are reported, it will become the stable release on December 6, 2022. I'd hate to break your CI, so test it beforehand. 😀 Thanks! pypi.org/project/tox/4.0.0rc1/

@btskinn That just seems weird to me. I mainly put all my requirements in `tox`, and occasionally I'll create a virtualenv where I do `pip install -e . && pip install ipython` just to have a little environment where I can play around with my library or whatever.

You can also just, like, not include `.` in your `requirements-dev.txt` file: `pip install -e . && pip install -r requirements-dev.txt`.

@btskinn @simon But why? There is (or should be) no advantage to using `-e` in a non-interactive install. Its only purpose is to let the installed code reflect changes to the source without reinstalling, but CI does a fresh install each time.

@simon Why `pip install -e`? That is usually a red flag for me in non-interactive environments, because it suggests you may be relying on some accidental "feature" of the editable install mechanism.

Better to test as your users are expected to install it: blog.ganssle.io/articles/2019/

@brainwane On my instance it's a quote-tweet. I was organizing it as "Interesting details about the project in the thread, then this call for users as a QT-comment-on-the-thread".

If anyone else finds this kind of thing useful, I'd totally love it if someone else started using this project. Particularly if you are the kind of person who is going to make lots of improvements to the front-end and then send me PRs 😉

I keep coming up with interesting improvements for this project, but I only have so much time to work on stuff like this.

Paul Ganssle  
Yesterday I released version 0.6.0 of my audiobook RSS server, audio-feeder: https://github.com/pganssle/audio-feeder It takes your directory of au...

I started this application in December 2016, before I knew anything about databases, so I hacked together a pseudo-DB out of YAML files, because I wanted to be able to edit the files by hand if I screwed up. As this "database" grew, parsing huge YAML files became a bottleneck; I lived with this for years, but recently, I managed to switch over to using a SQLite database!

This was surprisingly easy, because I already had a pseudo-ORM, and I just load the whole "database" into memory at startup, but I am still not using the features of a "real database", since my "queries" are basically Python code iterating over dictionaries and such.
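
A hypothetical illustration (schema and data made up) of the difference between the dictionary-iteration "queries" described above and letting SQLite do the filtering:

```python
import sqlite3

# Toy in-memory "database": a list of plain dictionaries.
books = [
    {"id": 1, "title": "Dracula", "author": "Bram Stoker"},
    {"id": 2, "title": "Emma", "author": "Jane Austen"},
]

# "Query" style 1: plain Python iteration over the loaded data.
by_austen = [b for b in books if b["author"] == "Jane Austen"]

# "Query" style 2: the same question, but SQLite does the work.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER, title TEXT, author TEXT)")
conn.executemany(
    "INSERT INTO books VALUES (?, ?, ?)",
    [(b["id"], b["title"], b["author"]) for b in books],
)
rows = conn.execute(
    "SELECT title FROM books WHERE author = ?", ("Jane Austen",)
).fetchall()
```

Both produce "Emma"; the SQL version also gets indexing, transactions, and on-disk durability for free.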


I really like the "segmented" feed, which breaks up books along chapter and/or file boundaries, recombining them to minimize the total deviation from 60-minute files. I like to listen to audiobooks in ~60-minute chunks, and this automates the process of chunking them up for me.

The implementation was a rare example where dynamic programming was useful in the wild (and not just in job interviews): github.com/pganssle/audio-feed
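
A sketch of the kind of dynamic program involved (my reconstruction, not the actual audio-feeder code): split a list of per-file durations into contiguous segments, minimizing the total deviation of segment lengths from a 60-minute target.

```python
def segment_files(durations, target=60.0):
    """Partition `durations` (minutes) into contiguous segments whose total
    deviation from `target` is minimal.

    best[i] is the minimal total deviation achievable for the first i files;
    cut[i] records where the last segment in that optimum starts.
    """
    n = len(durations)
    prefix = [0.0]
    for d in durations:
        prefix.append(prefix[-1] + d)

    best = [0.0] + [float("inf")] * n
    cut = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            seg_len = prefix[i] - prefix[j]  # total length of files j..i-1
            cost = best[j] + abs(seg_len - target)
            if cost < best[i]:
                best[i], cut[i] = cost, j

    # Walk the recorded cut points backwards to recover the segments.
    segments, i = [], n
    while i > 0:
        segments.append(durations[cut[i]:i])
        i = cut[i]
    return segments[::-1]

# e.g. segment_files([25, 20, 30, 35, 10, 50])
#   → [[25, 20], [30, 35], [10, 50]]  (segments of 45, 65, and 60 minutes)
```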

Thanks to @njs for suggesting the approach and basically implementing it flawlessly on the first try.


I've also created this probably convenient docker-compose repository for (somewhat) easily deploying `audio-feeder`: github.com/pganssle/audio_feed

Now featuring ✨🌟✨*installation instructions*✨🌟✨ (so fancy).


Yesterday I released version 0.6.0 of my audiobook RSS server, `audio-feeder`: github.com/pganssle/audio-feed

It takes your directory of audiobooks and generates an RSS feed for each one, so that you can listen to them in your standard podcast listening flow.

I'm particularly happy with the new feature "rendered feeds", which uses `ffmpeg` behind the scenes to generate alternate feeds where the audiobook is broken up along different lines.
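
For the curious: the usual ffmpeg trick for losslessly joining same-codec files is the concat demuxer. A rough sketch (my illustration, not audio-feeder's actual implementation) of driving it from Python:

```python
import subprocess
import tempfile

def build_concat_command(list_path, output):
    # The concat demuxer joins same-codec inputs; "-c copy" avoids
    # re-encoding, and "-safe 0" allows absolute paths in the list file.
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", str(list_path), "-c", "copy", str(output)]

def concat_audio(files, output):
    """Write the demuxer's file list, then run ffmpeg to join `files`."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for path in files:
            # One "file '<path>'" line per input, in playback order.
            f.write(f"file '{path}'\n")
    subprocess.run(build_concat_command(f.name, output), check=True)
```

Because the audio streams are copied rather than re-encoded, joining is fast and lossless, but all inputs must share a codec and parameters.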
