No more SPDY protocol support from Google Chrome.
What is the SPDY protocol? SPDY was introduced by Google in 2009 with the aim of making web browsing faster and more secure than sites running plain HTTP (Hypertext Transfer Protocol). HTTP has been the standard networking protocol powering the web since its inception, but according to Google's R&D department it is comparatively slow and less secure.
But times have changed: the Internet Engineering Task Force (IETF), the organisation that standardises HTTP, has been working to update HTTP/1.1 to a new version of the protocol called HTTP/2. With HTTP/2 standardisation underway, Google announced it would end support for SPDY.
Reason behind introducing SPDY protocol in 2009 – Google described SPDY as an experimental protocol for a faster web, intended to reduce the latency of web pages: an application-layer protocol for transporting content over the web, designed specifically for minimal latency. In addition to a specification of the protocol, Google developed a SPDY-enabled Google Chrome browser and an open-source web server. In lab tests, Google compared the performance of these applications over HTTP and SPDY, and invited the community to contribute ideas, feedback, code, and test results to make SPDY the next-generation application protocol for a faster web.
Some background: Web protocols and web latency
Since the early days of the web, HTTP and TCP have been its protocols. TCP is the generic, reliable transport protocol, providing guaranteed delivery, duplicate suppression, in-order delivery, flow control, congestion avoidance, and other transport features. HTTP is the application-level protocol providing basic request/response semantics. While we believe there may be opportunities to improve latency at the transport layer, our initial investigations have focused on the application layer, HTTP.
Unfortunately, HTTP was not particularly designed for latency. Furthermore, the web pages transmitted today are significantly different from web pages 10 years ago and demand improvements to HTTP that could not have been anticipated when HTTP was developed. The following are some of the features of HTTP that inhibit optimal performance:
Single request per connection. Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue), a server delay of 500 ms prevents reuse of the TCP channel for additional requests. Browsers work around this problem by using multiple connections. Since 2008, most browsers have finally moved from 2 connections per domain to 6.
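The cost of that per-connection FIFO queue can be sketched with a toy latency model (the numbers below are illustrative assumptions, not measurements from the article):

```python
import math

def page_load_time_ms(resources: int, connections: int, rtt_ms: int) -> int:
    """Toy model: each resource costs one round trip, and each connection
    is a FIFO queue that carries only one request at a time (HTTP/1.x)."""
    rounds = math.ceil(resources / connections)
    return rounds * rtt_ms

# A page with 30 resources at a hypothetical 100 ms per fetch:
one_conn = page_load_time_ms(30, 1, 100)   # a single connection serializes everything
six_conn = page_load_time_ms(30, 6, 100)   # 6 connections per domain, as in modern browsers
```

Even this crude model shows why browsers opened more connections: six parallel FIFO queues cut the serialized wait to a sixth, at the cost of extra TCP handshakes and server load.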
Exclusively client-initiated requests. In HTTP, only the client can initiate a request. Even if the server knows the client needs a resource, it has no mechanism to inform the client and must instead wait to receive a request for the resource from the client.
Uncompressed request and response headers. Request headers today vary in size from ~200 bytes to over 2 KB. As applications use more cookies and user agents expand features, a typical header size of 700-800 bytes is common. For modem or ADSL connections, in which the uplink bandwidth is fairly low, this latency can be significant. Reducing the data in headers could directly improve the serialization latency to send requests.
Redundant headers. Several headers are also repeatedly sent across requests on the same channel, even though headers such as User-Agent, Host, and Accept* are generally static and do not need to be resent.
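SPDY attacked both problems with zlib-based header compression over the lifetime of the connection. A minimal sketch of the effect, compressing a representative (invented) header block with Python's standard zlib module:

```python
import zlib

# A representative ~450-byte HTTP/1.1 request header block.
# All values here are illustrative, not from a real capture.
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Accept-Language: en-US,en;q=0.5\r\n"
    "Cookie: session=abc123; prefs=dark; lang=en\r\n"
    "\r\n"
).encode()

compressed = zlib.compress(headers)
# One-shot compression already shrinks the block; on a long-lived
# connection a shared compression context does even better, because
# repeated headers (User-Agent, Accept*, ...) compress to a few bytes.
```

(HTTP/2 later replaced raw zlib with the purpose-built HPACK scheme, partly for security reasons.)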
Optional data compression. HTTP uses optional compression encodings for data. Content should always be sent in a compressed format.
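The payoff of always-on body compression is easy to see on typical repetitive HTML, using Python's standard gzip module (the sample document is invented for illustration):

```python
import gzip

# Markup is highly repetitive, so it compresses very well.
html = ("<html><body>" + "<p>Hello, web!</p>" * 200 + "</body></html>").encode()

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)  # well under 1.0 for text like this
```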
Goals for SPDY
The SPDY project defines and implements an application-layer protocol for the web which greatly reduces latency. The high-level goals for SPDY are:
To target a 50% reduction in page load time. Our preliminary results have come close to this target (see below).
To minimize deployment complexity. SPDY uses TCP as the underlying transport layer, so requires no changes to existing networking infrastructure.
To avoid the need for any changes to content by website authors. The only changes required to support SPDY are in the client user agent and web server applications.
To bring together like-minded parties interested in exploring protocols as a way of solving the latency problem. We hope to develop this new protocol in partnership with the open-source community and industry specialists.
Some specific technical goals are:
To allow many concurrent HTTP requests to run across a single TCP session.
To reduce the bandwidth currently used by HTTP by compressing headers and eliminating unnecessary headers.
To define a protocol that is easy to implement and server-efficient. We hope to reduce the complexity of HTTP by cutting down on edge cases and defining easily parsed message formats.
To make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.
To enable the server to initiate communications with the client and push data to the client whenever possible.
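The first of those technical goals, many concurrent requests over one TCP session, rests on framing: each message is split into small frames tagged with a stream ID, so frames from different requests interleave on one connection and are reassembled on the other side. A simplified sketch of the idea (the frame layout here is invented for illustration and is not SPDY's actual wire format):

```python
from collections import defaultdict
from itertools import zip_longest

def to_frames(stream_id, payload, size=4):
    """Split one message into (stream_id, chunk) frames."""
    return [(stream_id, payload[i:i + size]) for i in range(0, len(payload), size)]

def interleave(*frame_lists):
    """Round-robin scheduler: one frame per stream per turn, onto one 'connection'."""
    return [f for turn in zip_longest(*frame_lists) for f in turn if f is not None]

def reassemble(wire):
    """Receiver groups frames by stream ID back into whole messages."""
    streams = defaultdict(bytes)
    for stream_id, chunk in wire:
        streams[stream_id] += chunk
    return dict(streams)

wire = interleave(to_frames(1, b"GET /style.css"), to_frames(3, b"GET /app.js"))
messages = reassemble(wire)
```

Because the frames carry stream IDs, neither request has to wait for the other to finish, which is exactly what a single HTTP/1.x connection cannot do.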
HTTP/2 Description
This specification describes an optimized expression of the semantics of the Hypertext Transfer Protocol (HTTP). HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent messages on the same connection. It also introduces unsolicited push of representations from servers to clients.
Goals of HTTP/2
- The working group charter mentions several goals and issues of concern:
- Negotiation mechanism that allows clients and servers to elect to use HTTP 1.1, 2.0, or potentially other non-HTTP protocols.
- Maintain high-level compatibility with HTTP 1.1 (for example with methods, status codes, and URIs, and most header fields)
- Decrease latency to improve page load speed in web browsers by considering:
Data compression of HTTP headers
Server push technologies
Fixing the head-of-line blocking problem in HTTP 1
Loading page elements in parallel over a single TCP connection
- Support common existing use cases of HTTP, such as desktop web browsers, mobile web browsers, web APIs, web servers at various scales, proxy servers, reverse proxy servers, firewalls, and content delivery networks
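The negotiation goal above is, in practice, handled by ALPN (Application-Layer Protocol Negotiation, a TLS extension): the client advertises the protocols it speaks during the TLS handshake and the server picks one. A minimal client-side sketch using Python's standard ssl module (the connection itself is left commented out, since it needs a live server):

```python
import ssl

# Offer HTTP/2 first, falling back to HTTP/1.1 if the server
# does not support it.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         protocol = tls.selected_alpn_protocol()  # "h2" or "http/1.1"
```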
Sources: Wikipedia and Google blogs