Content Delivery Network


A content delivery network or content distribution network (CDN) is a large distributed system of servers deployed in multiple data centers across the Internet. The goal of a CDN is to serve content to end-users with high availability and high performance. CDNs serve a large fraction of Internet content today, including web objects (text, graphics, and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social networks.

Content providers such as media companies and e-commerce vendors pay CDN operators to deliver their content to their audience of end-users. In turn, a CDN pays ISPs, carriers, and network operators for hosting its servers in their data centers. Besides better performance and availability, CDNs also offload traffic that would otherwise be served directly from the content provider's origin infrastructure, resulting in possible cost savings for the content provider. In addition, CDNs provide the content provider a degree of protection from DoS attacks by using their large distributed server infrastructure to absorb the attack traffic. While most early CDNs served content using dedicated servers owned and operated by the CDN, a more recent trend is a hybrid model that also uses peer-to-peer (P2P) technology. In the hybrid model, content is served both from dedicated servers and from peer-owned computers, as applicable.

In a CDN, content exists as multiple copies on strategically dispersed servers. A large CDN can have thousands of servers around the globe, making it possible for the provider to send the same content to many requesting client computing devices efficiently and reliably — even when bandwidth is limited or there are sudden spikes in demand. CDNs are especially well suited for delivering streaming audio, video, and Internet television (IPTV) programming, although an Internet service provider (ISP) may also use one to deliver static or dynamic Web pages.

CDN management software dynamically calculates which server is located nearest to the requesting client and delivers content based on those calculations. This not only shortens the distance that content travels but also reduces the number of hops a data packet must make. The result is less packet loss, better bandwidth utilization, and faster performance, which minimizes timeouts, latency, and jitter while improving the overall user experience (UX). In the event of an attack or a malfunction at an Internet junction, content hosted on a CDN server will remain available to at least some users.
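To make the picture concrete, here is a minimal sketch in Python of geographic nearest-server selection. The edge-server names and coordinates are hypothetical, and real CDNs typically combine geography with measured latency, server load, and cost rather than relying on distance alone.

```python
# A minimal sketch of nearest-server selection based purely on geography.
# The edge-server names and coordinates below are hypothetical illustrations.
from math import radians, sin, cos, asin, sqrt

EDGE_SERVERS = {
    "edge-frankfurt": (50.11, 8.68),
    "edge-singapore": (1.35, 103.82),
    "edge-virginia": (38.95, -77.45),
}

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_server(client_location):
    """Return the edge server geographically closest to the client."""
    return min(EDGE_SERVERS, key=lambda name: haversine_km(EDGE_SERVERS[name], client_location))

print(nearest_server((48.85, 2.35)))  # a client near Paris -> edge-frankfurt
```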

Operation

Most CDNs are operated as an application service provider (ASP) on the Internet (also known as on-demand software or software as a service). An increasing number of Internet network owners have built their own CDNs to improve on-net content delivery, reduce demand on their own telecommunications infrastructure, and generate revenue from content customers. This might include offering access to media streaming to Internet service subscribers. Some large technology companies, such as Microsoft and Amazon, build their own CDNs in tandem with their own products; examples include Microsoft Azure CDN and Amazon CloudFront.

In such a network, content (potentially in multiple copies) may exist on several servers. When a user makes a request to a CDN hostname, DNS resolves to an optimized server (based on location, availability, cost, and other metrics), and that server handles the request.
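From the client's point of view this routing is invisible: the client simply resolves the CDN hostname and connects to whatever address comes back. A small sketch of that client-side view, assuming the hypothetical hostname cdn.example.com:

```python
# A minimal sketch of the client-side view of DNS-based request routing:
# resolving a CDN hostname yields the address of whichever edge server the
# CDN's DNS chose for this client's resolver. The hostname is hypothetical.
import socket

def resolve_edge(hostname):
    """Return the set of IPv4 addresses the CDN's DNS hands back."""
    infos = socket.getaddrinfo(hostname, 443, family=socket.AF_INET, type=socket.SOCK_STREAM)
    return {sockaddr[0] for *_, sockaddr in infos}

# Different clients (or the same client via different resolvers) may receive
# different answers; that per-resolver variation is the routing mechanism.
print(resolve_edge("cdn.example.com"))
```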


Technology

CDN nodes are usually deployed in multiple locations, often over multiple backbones. Benefits include reduced bandwidth costs, improved page load times, and increased global availability of content. The number of nodes and servers making up a CDN varies with the architecture: some CDNs reach thousands of nodes with tens of thousands of servers across many remote points of presence (PoPs), while others build a global network from a small number of geographical PoPs.

Requests for content are typically directed algorithmically to nodes that are optimal in some way. When optimizing for performance, locations that are best for serving content to the user may be chosen, for example those that are the fewest hops or the lowest latency away from the requesting client, or those with the highest availability in terms of server performance (both current and historical), so as to optimize delivery across local networks. When optimizing for cost, the least expensive locations may be chosen instead. In the optimal scenario, these two goals tend to align, as servers close to the end-user at the edge of the network may have an advantage in both performance and cost.
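As an illustration of this performance-versus-cost trade-off, the sketch below scores hypothetical candidate nodes on normalized latency and delivery cost. The figures and the cost_weight parameter are illustrative assumptions, not real CDN metrics.

```python
# A hedged sketch of scoring candidate nodes on both performance and cost.
candidates = [
    # (node, measured latency in ms, delivery cost in $/GB) -- hypothetical
    ("pop-local-isp", 8, 0.05),
    ("pop-regional", 25, 0.02),
    ("pop-backbone", 60, 0.01),
]

def pick_node(candidates, cost_weight=0.5):
    """Lower score is better; cost_weight trades latency against $/GB."""
    max_latency = max(c[1] for c in candidates)
    max_cost = max(c[2] for c in candidates)

    def score(entry):
        _, latency_ms, cost_per_gb = entry
        # Normalize both terms against the worst candidate so they are comparable.
        return (1 - cost_weight) * latency_ms / max_latency + cost_weight * cost_per_gb / max_cost

    return min(candidates, key=score)[0]

print(pick_node(candidates, cost_weight=0.2))  # favours performance -> pop-local-isp
print(pick_node(candidates, cost_weight=0.9))  # favours cost -> pop-backbone
```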

Content networking techniques

The Internet was designed according to the end-to-end principle. This principle keeps the core network relatively simple and moves the intelligence as much as possible to the network endpoints: the hosts and clients. As a result, the core network is specialized, simplified, and optimized to only forward data packets.

Content delivery networks augment the end-to-end transport network by distributing over it a variety of intelligent applications that employ techniques designed to optimize content delivery. The resulting tightly integrated overlay uses web caching, server-load balancing, request routing, and content services. These techniques are briefly described below.

Web caches store popular content on servers placed where demand for that content is greatest. These shared network appliances reduce bandwidth requirements, reduce server load, and improve client response times for content stored in the cache.
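The caching idea can be sketched as a fixed-capacity, least-recently-used (LRU) store keyed by URL. This is a simplified illustration: real CDN caches also honor HTTP Cache-Control headers, TTLs, and object sizes, all omitted here, and the fetch_from_origin callback is a hypothetical stand-in for a request back to the origin server.

```python
# A minimal LRU web-cache sketch; eviction policy only, no HTTP semantics.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity, fetch_from_origin):
        self.capacity = capacity
        self.fetch_from_origin = fetch_from_origin  # called on a cache miss
        self.store = OrderedDict()                  # URL -> cached body

    def get(self, url):
        if url in self.store:
            self.store.move_to_end(url)        # hit: mark as recently used
            return self.store[url]
        body = self.fetch_from_origin(url)     # miss: go back to the origin
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict the least recently used
        return body

cache = EdgeCache(capacity=2, fetch_from_origin=lambda url: f"<content of {url}>")
cache.get("/logo.png")  # first request: fetched from the origin
cache.get("/logo.png")  # second request: served from the edge cache
```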

Server-load balancing uses one or more techniques, including service-based (global load balancing) and hardware-based approaches, i.e. layer 4–7 switches (also known as web switches, content switches, or multilayer switches), to share traffic among a number of servers or web caches. Here the switch is assigned a single virtual IP address; traffic arriving at the switch is then directed to one of the real web servers attached to it. This balances load, increases total capacity, improves scalability, and provides increased reliability by performing server health checks and redistributing the load of a failed web server.

A content cluster or service node can be formed using a layer 4–7 switch to balance load across a number of servers or a number of web caches within the network.
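In spirit, the switch's behavior can be sketched as a single virtual entry point that spreads requests round-robin across the healthy real servers behind it. The server names and health flags below are hypothetical, and a real layer 4–7 switch does this per packet or per connection in hardware.

```python
# A toy sketch of a virtual IP spreading traffic across healthy real servers.
from itertools import cycle

class VirtualServer:
    def __init__(self, real_servers):
        self.real_servers = real_servers            # name -> healthy? (from health checks)
        self._rotation = cycle(list(real_servers))  # round-robin order

    def route(self):
        """Return the next healthy real server, skipping failed ones."""
        for _ in range(len(self.real_servers)):
            server = next(self._rotation)
            if self.real_servers[server]:
                return server
        raise RuntimeError("no healthy servers behind the virtual IP")

vip = VirtualServer({"web-1": True, "web-2": True, "web-3": False})
print([vip.route() for _ in range(4)])  # web-3 is skipped: web-1, web-2, web-1, web-2
```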

Request routing directs client requests to the content source best able to serve the request. This may involve directing a client request to the service node that is closest to the client, or to the one with the most capacity. A variety of algorithms are used to route the request, including global server load balancing, DNS-based request routing, dynamic metafile generation, HTML rewriting, and anycasting. Proximity (choosing the closest service node) is estimated using a variety of techniques, including reactive probing, proactive probing, and connection monitoring.
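As a toy illustration of proximity estimation, the sketch below routes a request to the service node with the lowest median probe round-trip time. The node names and RTT samples are hypothetical; real request-routing systems weigh many more signals, as noted above.

```python
# A minimal sketch of proximity-based request routing from probe data.
probe_results = {
    "node-a": [12, 15, 11],   # RTT samples in ms, e.g. from reactive probing
    "node-b": [40, 38, 45],
    "node-c": [22, 90, 25],   # one outlier from a transient spike
}

def route_request(probes):
    """Send the client to the node with the lowest median probe RTT.
    The median damps one-off spikes better than the mean would."""
    def median(samples):
        s = sorted(samples)
        return s[len(s) // 2]  # fine for the odd-length sample lists used here
    return min(probes, key=lambda node: median(probes[node]))

print(route_request(probe_results))  # -> node-a
```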

Examples of CDN services

  • Google Page Speed Services
  • CloudFlare
  • CoralCDN
  • Incapsula

References

Nygren, Erik; Sitaraman, Ramesh K.; Sun, Jennifer: "The Akamai Network: A Platform for High-Performance Internet Applications," ACM SIGOPS Operating Systems Review, vol. 44, no. 3, July 2010.
"Akamai goes P2P, buys Red Swoosh," GigaOM, April 2, 2007. Retrieved March 16, 2012.
http://azure.microsoft.com/en-us/
http://aws.amazon.com/cloudfront/
Saltzer, J. H., Reed, D. P., Clark, D. D.: "End-to-End Arguments in System Design," ACM Transactions on Computer Systems, 2(4), 1984.
Hofmann, Markus; Beaumont, Leland R. (2005). Content Networking: Architecture, Protocols, and Practice. Morgan Kaufmann. ISBN 1-55860-834-6.
Barbir, A., Cain, B., Nair, R., Spatscheck, O.: "Known Content Network (CN) Request-Routing Mechanisms," RFC 3568, July 2003.
Partridge, C., Mendez, T., Milliken, W.: "Host Anycasting Service," RFC 1546, November 1993.
Elson, J., Cerpa, A.: "Internet Content Adaptation Protocol (ICAP)," RFC 3507, April 2003.
ICAP Forum
Barbir, A., Penno, R., Chen, R., Hofmann, M., Orman, H.: "An Architecture for Open Pluggable Edge Services (OPES)," RFC 3835, August 2004.
www.wikipedia.org
