A Python question regarding large file transfers over HTTP 

I'm working on a project that involves retrieving large (~2-8 GB) .zip files over HTTP and storing them for later processing. I've written a script that uses an API to look up and generate URLs for a series of needed files, and then attempts to stream each file to storage using requests.get() with iter_content().
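Roughly, the download loop for each file looks like the sketch below (the function and variable names are simplified placeholders, not the actual script):

import requests

def fetch(url, dest_path, chunk_size=1024 * 1024):
    # Stream the response to disk in 1 MiB chunks instead of holding it all in memory.
    with requests.get(url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(dest_path, "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)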

The problem is, my connection isn't perfectly stable (and I'm running this on a laptop which sometimes goes to sleep). When the connection is interrupted, the transfer dies and I need to restart it.

What would be the best way to add resume capability to my file transfers, so that if the script stalls or the connection drops, the download can pick up again from where it failed?
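(What I have in mind is something like the sketch below, which leans on the HTTP Range header; it assumes the server honours Range requests and answers 206 Partial Content, which I haven't verified for this API.)

import os
import requests

def download_with_resume(url, dest_path, chunk_size=1024 * 1024):
    # Hypothetical helper: continue a partial download from where it left off.
    existing = os.path.getsize(dest_path) if os.path.exists(dest_path) else 0
    headers = {"Range": f"bytes={existing}-"} if existing else {}
    with requests.get(url, headers=headers, stream=True, timeout=60) as r:
        if existing and r.status_code != 206:
            # Server ignored the Range header, so start again from byte zero.
            existing = 0
        r.raise_for_status()
        with open(dest_path, "ab" if existing else "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)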


@spinflip Hi, I don't have a direct answer to your question; I've never tried to do this before. However, the problem makes me think of mosh ( mosh.org/ ), an ssh alternative developed specifically for intermittent connections, and shoop, which is an scp alternative. Perhaps these could be of use if the normal HTTP method turns out to be difficult.
