@VoidLinux
Couple of things I forgot:
- Lets you choose between the glibc and musl C standard library implementations
- I haven't tried it, but it's supposed to work on ARM and other embedded devices
Hey Linux users, give some love to @VoidLinux . An amazing distribution built from scratch.
- Unique xbps package manager.
- LibreSSL instead of OpenSSL.
- Runit instead of Systemd.
- Rolling release, but unlike Arch, each new package or update requires a successful build via Travis CI on GitHub.
And feel free to ask anything if you are curious ❤️
@AlHunter
Welcome ❤ hope you enjoy your stay and the community
@freemo
Could you add the VoidLinux one too? Just for the sake of having it 😊
@freemo hahah love this one
@indiauplko Come to the dark and beautiful side <3
#QOTO announcement
Hi all, we did NOT migrate the servers yesterday; instead we have just begun that now. It shouldn't affect anything until I redirect the DNS in a few hours. At that point there may be momentary downtime. I will keep everyone informed on the schedule.
@freemo Doing good thanks!
@mamur I don't know how to see how many of us there are, but you see people from other domains (instances) because we're all connected as a decentralized network and can see each other. The different menus, as I understand them (someone correct me if I'm wrong):
Home - a mix of your instance "qoto.org" and the people you follow, who may be on other instances such as mastodon.social
Local timeline - your instance's timeline (qoto.org)
Federated timeline - A hell where all the different instances converge
Does anybody have any idea how many users this social media platform has? I'm signed up with QOTO.org, but I keep seeing folks with other domain names on here too. Still trying to figure this platform out.
A Python question regarding large file transfers over HTTP
I'm working on a project that involves retrieving large (~2-8 GB) .zip files over HTTP and storing them for later processing. I've written a script that uses an API to look up and generate URLs for a series of needed files, and then attempts to stream each file to storage using requests.get().iter_content.
The problem is, my connection isn't perfectly stable (and I'm running this on a laptop which sometimes goes to sleep). When the connection is interrupted, the transfer dies and I need to restart it.
What would be the best way to add resume capability to my file transfer, so that if the script stalls or the connection drops, the download can pick up from where it failed?
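One common approach (not from the original post, just a suggestion): keep the partial file on disk and ask the server for only the missing bytes with an HTTP `Range` header, which requests supports via a plain header dict. A minimal sketch, assuming the server honors range requests (answers `206 Partial Content`); the `resume_download` name and chunk size are my own choices:

```python
import os
import requests  # third-party: pip install requests

def resume_download(url, dest, chunk_size=1024 * 1024):
    """Download url to dest, resuming from any partial file on disk.

    Sends "Range: bytes=N-" so the server transmits only the bytes we
    are missing. If the server ignores the header (no 206 response),
    we fall back to restarting the download from zero.
    """
    # How much do we already have from a previous, interrupted run?
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={offset}-"} if offset else {}

    with requests.get(url, headers=headers, stream=True, timeout=30) as r:
        r.raise_for_status()
        if offset and r.status_code != 206:
            # Server doesn't support ranges: start over from scratch.
            offset = 0
        with open(dest, "ab" if offset else "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
```

You'd then wrap the call in a retry loop that catches `requests.ConnectionError` / `requests.Timeout` and calls `resume_download` again after a short sleep; each retry continues from whatever is already on disk. One caveat: if a server ignores `Range`, this sketch silently restarts the whole file, so for 8 GB downloads it's worth checking the `Accept-Ranges` header first.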
Are any of you still using the #usenet, and if so, how do you access it? A provider? Google Groups?
@Printsi
Welcome! We'll be waiting for those reviews. 😍
Hope you enjoy your stay and the community
@Surasanji
Same, I want to know a little more before buying it, but it looks pretty amazing. I particularly loved the city
Software Engineering student.
I really really want a lemonade.