Reading this WebP vulnerability report and I got to the words “lossless image compression” and “Huffman encoded Huffman tables” and I am trying to understand what we’re doing here other than paying for exploit developers’ kids’ orthodontia. https://blog.isosceles.com/the-webp-0day/
@matthew_d_green The whole point of compressing images is that your phone receives fewer bytes. If we didn't care how many bytes it receives we'd do exactly what you said with some format equivalent to PNM.
Then we'd likely have picked webp as one of those formats anyway (IIRC it's better than png and gif for lossless compression, but please don't trust my imperfect recall).
Alas, many computers are battery-powered now, so the lack of optimization for energy usage remains a noticeable problem. On the runtime-performance side, the incentive to speed up decompression grows as available bandwidth increases (decompression becomes a larger fraction of total latency), at least as long as we keep compensating for that extra bandwidth by sending more data in equivalent situations.
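A back-of-the-envelope sketch of that latency argument (the numbers here are illustrative assumptions, not measurements): the same image with the same decode time goes from decode-is-negligible on a slow link to decode-dominated on a fast one.

```python
# Illustrative only: how the decode share of total latency grows with
# bandwidth. All figures below are assumed, not measured.

def decode_fraction(size_bytes, bandwidth_bps, decode_s):
    """Fraction of total latency spent decoding one image."""
    transfer_s = size_bytes * 8 / bandwidth_bps
    return decode_s / (transfer_s + decode_s)

# Same hypothetical 1 MB image, same 10 ms decode, two link speeds.
slow = decode_fraction(1_000_000, 10_000_000, 0.010)     # 10 Mbit/s link
fast = decode_fraction(1_000_000, 1_000_000_000, 0.010)  # 1 Gbit/s link

# On the slow link decoding is ~1% of latency; on the fast link it's
# over half, so the pressure to optimize decoders grows with bandwidth.
print(round(slow, 3), round(fast, 3))
```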
@robryk@qoto.org @matthew_d_green@ioc.exchange
many computers are battery-powered now, so lack of optimization for energy usage remains noticeably bad
Yeah…… But on the other hand, you don't receive JBIG2 or other strangely coded pictures in iMessage often, so even if the decoder is slow and power-hungry, the overall effect is still negligible, isn't it? 🤔 @matthew_d_green BTW you might find https://github.com/google/wuffs interesting.
@robryk The problem isn’t that we compress the images. It’s that we support weird formats *and* require decoding of those weird old formats to be blazing fast. What if we just picked a couple of good formats with fast decoders, and made everything else slower? Computers are fast as hell now.