Abstract

To provide high performance and cope with ever-increasing traffic demand, Content Delivery Network (CDN) providers have started considering multi-tier architectures in which simple caching devices augment their server infrastructure, resulting in a massively distributed caching network. These caching devices (e.g., set-top boxes) are usually geographically distributed, albeit with limited storage space and bandwidth, and can potentially alleviate the servers' load. This paper initiates the study of the joint resource allocation and routing problem underlying such networks, subject to providing at least a minimum bandwidth for each request. We present Tero, a system that maximizes throughput in such scenarios and leverages popularity forecasting to adapt quickly to demand changes. In Tero, the CDN's edge server decides whether to serve each request locally or redirect it to a specific caching device, maximizing overall system throughput by offloading traffic to the device caches. To adjust to the highly dynamic demand patterns, Tero performs frequent near-future content popularity predictions and makes allocation decisions every few minutes. We model the optimization problem under these constraints and derive optimality properties via a Lagrangian formulation, from which we design heuristic algorithms. We evaluate Tero on synthetic and real-world request sequences from a large CDN, through ablation studies, and by comparing against an upper performance bound. Tero reduces the edge server's throughput while providing sufficient bandwidth to each request, outperforming the competing baselines by up to 44% and remaining close to the ideal upper bounds. Moreover, Tero makes allocation decisions orders of magnitude faster than solving the exact problem.
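To make the per-request routing decision concrete, the following is a minimal, hypothetical sketch of the choice the abstract describes: the edge server redirects a request to a device cache only if that device holds the content and can spare the guaranteed minimum bandwidth, and otherwise serves it locally. All identifiers (`DeviceCache`, `route_request`, `MIN_BANDWIDTH`) and the greedy selection rule are illustrative assumptions, not Tero's actual allocation algorithm.

```python
# Illustrative sketch only: a toy greedy version of the routing decision
# (serve at the edge vs. redirect to a device cache). The greedy rule and
# all names are assumptions made for illustration, not the paper's method.
from dataclasses import dataclass, field


@dataclass
class DeviceCache:
    """A helper caching device with limited bandwidth and a small cache."""
    device_id: str
    bandwidth_capacity: float          # Mbps the device can still serve
    cached_contents: set = field(default_factory=set)


MIN_BANDWIDTH = 2.0  # minimum bandwidth (Mbps) guaranteed to every request


def route_request(content_id: str, devices: list[DeviceCache]) -> str:
    """Redirect the request to a device cache if one can serve it with at
    least MIN_BANDWIDTH; otherwise fall back to the edge server."""
    # Consider only devices that cache the content and have spare bandwidth.
    candidates = [d for d in devices
                  if content_id in d.cached_contents
                  and d.bandwidth_capacity >= MIN_BANDWIDTH]
    if candidates:
        # Greedily pick the device with the most remaining bandwidth.
        best = max(candidates, key=lambda d: d.bandwidth_capacity)
        best.bandwidth_capacity -= MIN_BANDWIDTH   # reserve bandwidth
        return best.device_id                      # traffic offloaded
    return "edge-server"                           # served locally


# Example usage with two hypothetical set-top boxes.
devices = [
    DeviceCache("stb-1", bandwidth_capacity=5.0, cached_contents={"video-A"}),
    DeviceCache("stb-2", bandwidth_capacity=1.5, cached_contents={"video-A"}),
]
print(route_request("video-A", devices))  # -> "stb-1"
print(route_request("video-B", devices))  # -> "edge-server" (not cached)
```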