Right now, we send web content as-is from our web services (tlwebadm and tlwebaccess). Generally this works fine, but we could reduce bandwidth by compressing the data in transit, and most of what we send would compress well. Python has zlib support, so that part of the puzzle is already available; fancier algorithms such as Brotli would probably be difficult to support, though. The counter-point is that the data we currently send is not very large, especially compared to the VNC data sent later. That could change, however, as we use more JavaScript and SVG images.
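For illustration, here is a minimal sketch of what conditional gzip compression could look like in a Python handler, assuming an http.server-style service. The handler class and placeholder payload are hypothetical, and the Accept-Encoding check is deliberately simplistic (substring match, no q-values):

```python
import gzip
from http.server import BaseHTTPRequestHandler, HTTPServer

class GzipHandler(BaseHTTPRequestHandler):
    """Hypothetical handler that gzip-compresses the response body
    when the client advertises support via Accept-Encoding."""

    def do_GET(self):
        body = b"<html>example content</html>" * 200  # stand-in payload
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Only compress when the client says it can handle gzip.
        # gzip uses the same DEFLATE machinery as zlib, which is in
        # the standard library, so no new dependencies are needed.
        if "gzip" in self.headers.get("Accept-Encoding", ""):
            body = gzip.compress(body)
            self.send_header("Content-Encoding", "gzip")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), GzipHandler).serve_forever()
```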
Also note that .svgz seems to be a mess and should probably be avoided: https://github.com/w3c/svgwg/issues/701
A quick test with a prototype confirms that compressing data gives very little improvement in page load time, simply because we have very little data to load. The data does compress well, though, with at least a 50% reduction for most files. Compression might be more beneficial under extreme circumstances, but those circumstances may be too extreme to matter in practice. The dominating cost right now is TLS setup, which means that bug 6003 is probably the best way to improve load times.
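For reference, the compression-ratio part of the measurement is easy to reproduce with stock zlib; this is an assumed stand-in, not the prototype itself:

```python
import sys
import zlib

# Rough check of how well a file compresses with zlib at the default
# level; prints original size, compressed size, and the reduction.
data = open(sys.argv[1], "rb").read()
packed = zlib.compress(data)
ratio = 100.0 * (1 - len(packed) / len(data))
print(f"{len(data)} -> {len(packed)} bytes ({ratio:.1f}% reduction)")
```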