I've got a hobby-interest in remote sensing (satellite imagery). Over the past couple of days, I've been playing around with data from the ESA's Sentinel-1 mission. The ESA (being cool and European Union-y) makes most of the data from the Sentinel series of satellites freely accessible to the public, and provides some decent software for processing and analysing the data.
Sentinel-1 is a synthetic aperture radar (SAR) satellite. I don't fully understand the physics behind SAR, but it's basically an active radar measurement of the ground track the satellite passes over. Different surfaces give different sorts of radar returns (measured as a change in polarisation), and so SAR can be used to classify different terrains (crops, forests, grasslands, rock, etc), like in the false-colour image of Flevoland I've attached. Resolution is moderate: for Sentinel-1, each pixel ends up being about 4x4 m on the ground.
SAR imagery does not have amazing spatial resolution, but is often good enough to do things like identify shipping. Water is a uniquely flat surface, so metal objects floating on water give a good return against a low background signal*. Some computationally demanding image processing later, and you can pick out ship locations. I've got a vague idea that it could be interesting to find ships in the territorial waters of North Korea, correlate against AIS tracks, and try to find some sanction-busting shipping running dark without AIS.
*This makes me wonder: the USSR really struggled with power requirements for the radar on its RORSAT ocean-monitoring satellites, to the point that it ended up having to power them using the only nuclear reactors to be launched into space. Why is SAR so much more efficient?
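A toy sketch of that "bright target on dark water" idea, just for illustration: threshold the backscatter image and treat connected clusters of bright pixels as candidate ships. Proper detectors (CFAR and the like) adapt the threshold to the local sea clutter, and the array name and parameters here are purely hypothetical, not what any real pipeline uses.

```python
# Toy ship detector: flag pixels far above the typical sea return, then group
# adjacent bright pixels into blobs and keep the ones big enough to be a vessel.
# Assumes `backscatter` is a 2D numpy array covering an open-water scene.
import numpy as np
from scipy import ndimage

def candidate_ships(backscatter, k=5.0, min_pixels=3):
    # Global threshold: median sea return plus k standard deviations.
    threshold = np.median(backscatter) + k * backscatter.std()
    mask = backscatter > threshold

    # Label connected bright blobs and compute their centroids and sizes.
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    sizes = ndimage.sum(mask, labels, range(1, n + 1))

    # Return (row, col) centroids of blobs that clear the size cut.
    return [c for c, s in zip(centroids, sizes) if s >= min_pixels]
```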
Anyway, that's all background. Today I spent a few hours trying to set up python scripts to find the most recent Sentinel passes over a given geographical location and download the associated imagery products. This would almost certainly be trivial for anyone with a proper computing background, but for this procrastinating chemist the steps involved learning how to:
• Find the relevant API and options: ✔️, not too bad
• Make an authenticated HTTP call to the constructed URI ✔️
• Parse the returned XML into a pandas dataframe of imaging mode, acquisition date and time, and unique ID for each frame covering the specified lat/lon location ✔️ (a rough sketch of the query-and-parse step is below the list)
• Download each ~1.8 GB image into a folder for processing. Still in progress: I've worked out how to make an authenticated HTTP GET request that *should* stream the retrieved file to storage, but the connection seems unstable and I can't get the download to finish (a sketch of a resumable download is also below). Sorting this out is the next thing I need to do: once I've got a way to download and archive the specified files, it'll be time to start looking into automated processing and analysis. Still not even really sure what I'm trying to achieve, but I'm having fun and learning stuff so far!
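For the curious, here's roughly what the query-and-parse step looks like. The endpoint URL, query grammar, and credentials below are placeholders (the real hub has its own OpenSearch-style syntax and account system), so treat this as a sketch of the approach rather than working code against the actual API:

```python
# Sketch: query an OpenSearch-style hub for Sentinel-1 products covering a point,
# then flatten the Atom XML response into a pandas dataframe.
import requests
import xml.etree.ElementTree as ET
import pandas as pd

HUB_URL = "https://example-hub/search"   # placeholder endpoint
AUTH = ("my_username", "my_password")    # placeholder credentials

def search_products(lat, lon, rows=10):
    # Footprint-intersection query; the exact query grammar depends on the hub.
    query = f'platformname:Sentinel-1 AND footprint:"Intersects({lat}, {lon})"'
    resp = requests.get(HUB_URL, params={"q": query, "rows": rows},
                        auth=AUTH, timeout=60)
    resp.raise_for_status()

    # The response is an Atom feed: one <entry> per product.
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(resp.content)

    records = []
    for entry in root.findall("atom:entry", ns):
        record = {
            "id": entry.findtext("atom:id", default="", namespaces=ns),
            "title": entry.findtext("atom:title", default="", namespaces=ns),
        }
        # Product metadata (imaging mode, acquisition time, size, ...) is carried
        # in typed child elements that have a "name" attribute.
        for child in entry:
            name = child.get("name")
            if name:
                record[name] = child.text
        records.append(record)
    return pd.DataFrame(records)

df = search_products(lat=39.0, lon=125.7)
print(df[["title", "id"]].head())
```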
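And one way to make the flaky download more robust (not something I've got working yet, just a sketch): stream the file in chunks and, if the connection drops, retry and resume from the partial file using an HTTP Range header. This assumes the server honours range requests; the URL and credentials are again placeholders.

```python
# Sketch: chunked, resumable download with retries and exponential backoff.
import os
import time
import requests

AUTH = ("my_username", "my_password")    # placeholder credentials

def download(product_url, out_path, chunk_size=1024 * 1024, max_retries=5):
    for attempt in range(max_retries):
        # Resume from however much of the file is already on disk.
        resume_from = os.path.getsize(out_path) if os.path.exists(out_path) else 0
        headers = {"Range": f"bytes={resume_from}-"} if resume_from else {}
        try:
            with requests.get(product_url, auth=AUTH, headers=headers,
                              stream=True, timeout=(10, 120)) as resp:
                if resp.status_code == 416:      # range past end: already complete
                    return out_path
                resp.raise_for_status()
                # Append only if the server actually honoured the range request.
                mode = "ab" if resume_from and resp.status_code == 206 else "wb"
                with open(out_path, mode) as f:
                    for chunk in resp.iter_content(chunk_size=chunk_size):
                        if chunk:
                            f.write(chunk)
            return out_path
        except (requests.ConnectionError, requests.Timeout):
            time.sleep(2 ** attempt)  # back off, then retry and resume
    raise RuntimeError(f"Giving up on {product_url} after {max_retries} attempts")
```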
@spinflip If you have any questions about processing that data, hit me up! I might be able to give some pointers.
Are you already familiar with opencv?
@pkok opencv? Not at all. What is it?
And I may well do that! Is your experience in imagery, SAR, python in general, SNAPPy in particular?..
@pkok ah, I think I might actually have heard of that! Right now I'm still very much focused on data collection + processing. Image recognition would be cool once I've started to stack processed TIFFs up.
I'm going to be away from my (shitty, struggling) computer this weekend, but I might have a few python Qs for you next week. My experience so far has all been in data analysis, and the whole HTTP/API/file management sphere is completely new to me.
@spinflip Yeah, I'm not big on that part either. Hit me up whenever you like!