This is the stuff that "Future" is made of. Here's what Wikipedia has to say about HTTP/2 over HTTP/1.1:

> HTTP/2 leaves most of HTTP 1.1's high-level syntax, such as methods,
> status codes, header fields, and URIs, the same. The element that is
> modified is how the data is framed and transported between the client
> and the server.
>
> Websites that are efficient minimize the number of requests required
> to render an entire page by minifying (reducing the amount of code and
> packing smaller pieces of code into bundles, without reducing its
> ability to function) resources such as images and scripts. However,
> minification is not necessarily convenient nor efficient and may still
> require separate HTTP connections to get the page and the minified
> resources. HTTP/2 allows the server to "push" content, that is, to
> respond with data for more queries than the client requested. This
> allows the server to supply data it knows a web browser will need to
> render a web page, without waiting for the browser to examine the
> first response, and without the overhead of an additional request
> cycle.
>
> Additional performance improvements in the first draft of HTTP/2
> (which was a copy of SPDY) come from multiplexing of requests and
> responses to avoid the head-of-line blocking problem in HTTP 1 (even
> when HTTP pipelining is used), header compression, and prioritization
> of requests.
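Just to make the multiplexing point concrete, here is a rough client-side sketch, not project code: it assumes the third-party httpx library installed with its HTTP/2 extra (`pip install "httpx[http2]"`), and the URL/paths are placeholders. Several requests share one TCP+TLS connection as concurrent HTTP/2 streams:

```python
# Sketch only: HTTP/2 multiplexing from the client side using the third-party
# httpx library. The host and paths are placeholders.
import asyncio
import httpx

async def main():
    # One TCP+TLS connection; with HTTP/2 these requests run as concurrent
    # streams instead of queueing behind each other at the HTTP layer.
    async with httpx.AsyncClient(http2=True) as client:
        responses = await asyncio.gather(
            *(client.get("https://example.com" + path)
              for path in ("/", "/style.css", "/app.js"))
        )
    for r in responses:
        print(r.url, r.http_version, r.status_code)

asyncio.run(main())
```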
See bug 6003 for HTTP/1.1
Now there's HTTP/3 as well, which is HTTP over QUIC. Unclear if any of these provide any real benefit to us. The problems they solve seem to be focused around use cases where there are lots and lots of connections.
Should probably also look at ALPN, which lets the client and server negotiate the application protocol already during the TLS handshake.
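For reference, ALPN is exposed by common TLS stacks. A minimal sketch using Python's standard ssl module (host and offered protocol list are just illustrative):

```python
# Sketch only: probe which application protocol a server selects via ALPN.
# Host and offered protocols are placeholders.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # offered in the ClientHello

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # The protocol is agreed on during the TLS handshake itself,
        # before any application data is exchanged.
        print("negotiated:", tls.selected_alpn_protocol())
```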
(In reply to Pierre Ossman from comment #2)
> Unclear if any of these provide any real benefit to us. The problems they
> solve seem to be focused around use cases where there are lots and lots of
> connections.

I see two features in QUIC that might be useful even in single-connection scenarios:

* They've learned from TCP's mistakes, so they've designed the loss detection and congestion control to be more robust and accurate, which means better performance under bad network conditions
* Connection hand-over when switching networks, e.g. when moving from Wi-Fi to mobile

A big risk with QUIC, though, is that it is UDP based and might have more issues traversing firewalls, as they cannot track connection state reliably.
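If we ever want to experiment, here is a rough sketch of opening a QUIC connection with the third-party aioquic library. Host, port and ALPN value are placeholders (the server must actually speak QUIC), and the ping is only there to show the UDP-based connection is up:

```python
# Sketch only: establish a QUIC connection (over UDP) and send a PING,
# using the third-party aioquic library (pip install aioquic).
# Host, port and ALPN value are placeholders.
import asyncio
from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def main():
    config = QuicConfiguration(is_client=True, alpn_protocols=["h3"])
    # QUIC runs over UDP, so this traffic is what a firewall would
    # have to let through and track.
    async with connect("example.com", 443, configuration=config) as conn:
        await conn.ping()  # round trip over the established connection
        print("QUIC connection established")

asyncio.run(main())
```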