
Stand together instead of dividing: the CCC, along with hundreds of initial signatories, calls on everyone to show their colors at the #unteilbar demo on October 13 in Berlin unteilbar.org

Offering a shared-flat room / housing with a terrace and garden access in northern Germany for people on low incomes from #Sachsen who want/need to get out before the Nazi party starts (co-)governing there in 2019.
LGBT+ & PoC preferred. Children are warmly welcome. Pets are fine too.
Mail: inkorrupt (at) posteo (dot) de
#säxit4all #ltwsn #chemnitz

A Python question regarding large file transfers over HTTP 

I'm working on a project that involves retrieving large (~2-8 GB) .zip files over HTTP and storing them for later processing. I've written a script that uses an API to look up and generate URLs for a series of needed files, then attempts to stream each file to storage using requests.get().iter_content().

The problem is, my connection isn't perfectly stable (and I'm running this on a laptop which sometimes goes to sleep). When the connection is interrupted, the transfer dies and I need to restart it.

What would be the best way to add resume capability to my file transfers? So that if the script stalls or the connection drops, the download can pick up from where it failed?
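One common approach is the HTTP Range header: if the server supports byte ranges (it answers 206 Partial Content), you can ask it to resume where the partial file on disk ends. Below is a minimal sketch along those lines; `resume_download` and `range_header` are hypothetical helper names, and whether this works depends on the server actually honoring Range requests.

```python
import os
import requests  # third-party: pip install requests


def range_header(offset):
    """Build a Range header asking the server to resume at `offset` bytes."""
    return {"Range": f"bytes={offset}-"} if offset else {}


def resume_download(url, dest, chunk_size=1 << 20):
    """Stream `url` to `dest`, resuming from a partial file if one exists."""
    # How many bytes we already have on disk, if any.
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    with requests.get(url, headers=range_header(offset),
                      stream=True, timeout=60) as r:
        if offset and r.status_code != 206:
            # Server ignored the Range request; restart from scratch.
            offset = 0
        r.raise_for_status()
        with open(dest, "ab" if offset else "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
```

Wrapping the call in a retry loop (catching `requests.exceptions.ConnectionError` and calling `resume_download` again) would then survive dropped connections, since each retry resumes from the current file size.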

Are any of you still using Usenet, and if so, how do you access it? A provider? Google Groups?

Whenever I meet people who think Linux is hard to install and Windows is so wonderfully easy, I start raving about manually configuring AUTOEXEC.BAT and CONFIG.SYS under Windows 3.11, and then I suddenly notice from the blank faces around me just how old I am. ;)

Hi everyone, I am new to Mastodon, so I'm still not very familiar with the whole thing, but it looks very promising!

I am a (soon to be) Physics PhD student, and I'm interested in the links between Statistical Mechanics and biological evolution.
Beyond that, I like to read (and sometimes write) about the links between technological advancement, such as machine learning or social networks, and changes in our society and politics!

@IntegralDuChemin It has relatively complete documentation compared to OpenBSD.

Fediverse Health Indicator (FHI) =

Number of instances / Number of accounts

Range 0–1. The closer to one, the better.

HT @IntegralDuChemin #FHI

@aral @IntegralDuChemin I think small instances are better than instances of one. Makes moderation easier than a free-for-all and minimizes administration costs. I'm talking hundreds or low thousands.

If there is a for Mac OS X, can it also run under FreeBSD? What would that depend on?

Is there a way to access Usenet for free and anonymously?

part 2

I love spending time on the internet reading about strange topics. Right now that means a lot of time reading about old operating systems from the and, as well as their use and their . It's just interesting that every user still uses programs written in the 70s and 80s, like bash or the X Window System.

I like the authors' writing style (of the linked book, that is) and wonder whether the bugs it cites are still a problem for Linux.


Interviewer: What's your biggest strength?

Me: I'm an expert in machine learning.

Interviewer: What's 9 + 10?

Me: It's 3.

Interviewer: Not even close. It's 19.

Me: It's 16.

Interviewer: Wrong. It's still 19.

Me: It's 18.

Interviewer: No, it's 19.

Me: it's 19.

Interviewer: You're hired!

For all the and among you - I highly recommend textfiles.com, a site containing thousands of files from the old BBS days, plus some interesting things from more recent times as well.
For example, try this: the Unix-Haters Handbook - quite a fun read.

pdf.textfiles.com/books/ugh.pd

I am looking for a way to search a file for a specific pattern, '.00', and count how often it appears. grep -o '.00' file does this quite well. Unfortunately, I don't know how to exclude any match where the pattern I want is embedded in a longer string.

What I want to have:

the number of times '.00' appears in the file

What I want to exclude:

longer patterns like '0.000001', i.e. everything matching '*.00*'

Does anybody have an idea? Could I do something like grep -o ('.00' and not '*.00*')?

Thanks in advance, advice and boosting appreciated.
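One thing to watch out for: an unescaped '.' in a regex matches any character, not a literal dot, so it should be escaped. Assuming GNU grep (for the -P Perl-regex flag), a negative lookahead can reject matches that are followed by another digit, which excludes hits inside longer numbers like '0.000001'. A sketch with a made-up sample file:

```shell
# sample.txt is a throwaway file created just for this demonstration.
printf 'x.00 y\n0.000001\nz.00\n' > sample.txt

# \.00 matches a literal ".00"; the lookahead (?!\d) rejects any match
# followed by another digit. -o prints one match per line, wc -l counts.
grep -oP '\.00(?!\d)' sample.txt | wc -l   # prints 2
```

If -P isn't available (e.g. BSD grep), a rougher ERE alternative would be something like grep -oE '\.00([^0-9]|$)', though whether '$' is allowed inside a group varies between grep implementations.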

Hey, #Keybase users: If you'd like to see Mastodon support in Keybase (which is just *perfect* in a federated network for verifying people) and have a GitHub account, make sure to voice your wish by +1'ing this comment on the related issue. That might make it happen!

github.com/keybase/keybase-iss
