I've also created this probably convenient docker-compose repository for (somewhat) easily deploying `audio-feeder`: https://github.com/pganssle/audio_feeder_docker
Now featuring ✨🌟✨*installation instructions*✨🌟✨ (so fancy).
I started this application in December 2016, before I knew anything about databases, so I hacked together a pseudo-DB out of YAML files, because I wanted to be able to edit the files by hand if I screwed something up. As this "database" grew, parsing huge YAML files became a bottleneck; I lived with this for years, but recently, I managed to switch over to a SQLite database!
The switch was surprisingly easy, because I already had a pseudo-ORM and I just load the whole "database" into memory at startup. That said, I'm still not using the features of a "real" database, since my "queries" are basically Python code iterating over dictionaries and the like.
I really like the "segmented" feed, which breaks up books along chapter and/or file boundaries, recombining them to minimize the total deviation from 60-minute files. I like to listen to audiobooks in ~60 minute chunks, and this automates the process of chunking them up for me.
The implementation was a rare example where dynamic programming was useful in the wild (and not just in job interviews): https://github.com/pganssle/audio-feeder/blob/1a07c8ffa7c7b548471f979382fedb653ce6ee5a/src/audio_feeder/segmenter.py#L45-L102
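For a flavor of what that kind of dynamic program looks like, here's a minimal sketch (not the actual `audio-feeder` code — the function name, cost function, and 60-minute target are my assumptions): partition a list of chapter durations into contiguous segments, minimizing the summed absolute deviation of each segment from the target length.

```python
# Hypothetical sketch of DP-based segmentation; NOT the audio-feeder
# implementation. Partitions contiguous chapter durations (minutes)
# into segments minimizing total |segment length - target|.
from typing import List, Sequence


def segment(durations: Sequence[float], target: float = 60.0) -> List[List[float]]:
    n = len(durations)

    # prefix[i] = total duration of the first i chapters
    prefix = [0.0]
    for d in durations:
        prefix.append(prefix[-1] + d)

    # best[i] = minimal total deviation achievable for the first i chapters
    # split[i] = start index of the last segment in that optimal solution
    best = [0.0] + [float("inf")] * n
    split = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            cost = best[j] + abs((prefix[i] - prefix[j]) - target)
            if cost < best[i]:
                best[i] = cost
                split[i] = j

    # Walk the split points backwards to recover the segments.
    segments: List[List[float]] = []
    i = n
    while i > 0:
        j = split[i]
        segments.append(list(durations[j:i]))
        i = j
    segments.reverse()
    return segments
```

The key property that makes the DP work is that the best segmentation of the first *i* chapters only depends on where the last segment starts, so each prefix's optimum can be built from earlier prefixes in O(n²) time.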
Thanks to @njs for suggesting the approach and basically implementing it flawlessly on the first try.