"Graph Federated Learning Based Proactive Content Caching in Edge Computing"
The podcast below on this paper was generated with Google's Illuminate.
https://arxiv.org/abs/2502.04760
Growing mobile data traffic and video streaming demand efficient content caching at the network edge. Traditional caching methods and existing proactive caching approaches predict content popularity poorly and raise privacy concerns.
This paper introduces Graph Federated Learning based Proactive Content Caching (GFPCC), which improves caching accuracy and protects user privacy by combining federated learning with graph neural networks.
-----
📌 Federated Learning in GFPCC is vital for user privacy. Training LightGCN locally on devices prevents direct data exposure; only model updates are shared for global learning.
📌 Light Graph Convolutional Network (LightGCN) is used for efficient content popularity prediction. It leverages user-item interaction graphs to learn embeddings, enhancing cache hit ratio.
📌 Simplified Graph Neural Network architecture of LightGCN reduces computational overhead. This efficiency makes GFPCC feasible for deployment on resource-constrained edge devices.
----------
Methods Explored in this Paper 🔧:
→ This paper proposes a Graph Federated Learning based Proactive Content Caching (GFPCC) scheme.
→ GFPCC uses a hierarchical architecture. Each user trains a Light Graph Convolutional Network (LightGCN) model locally using their own data.
→ Local training captures user-item relationships to predict content popularity. Users send only trained model parameters to a central server, never raw data.
→ The server aggregates these parameters using federated averaging to refine a global model. This global model selects popular files for proactive caching.
→ LightGCN is used for collaborative filtering. It learns user and item embeddings and their relationships from a user-item interaction graph.
→ LightGCN simplifies Graph Convolutional Networks by using only normalized aggregation of neighbor embeddings and removing non-linear activations, improving computational efficiency.
→ Federated learning is implemented with the q-FedAvg aggregation algorithm, which ensures fairness among users during global model aggregation.
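The LightGCN propagation the bullets describe — symmetric normalized averaging of neighbor embeddings, no feature transforms or nonlinearities, with final embeddings averaged over layers — can be sketched as follows. The function name and the toy bipartite graph are illustrative, not taken from the paper:

```python
import numpy as np

def lightgcn_embeddings(adj, emb0, num_layers=3):
    """LightGCN propagation: repeatedly mix neighbor embeddings with
    symmetric normalization D^-1/2 A D^-1/2, then average all layers."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    layers = [emb0]
    for _ in range(num_layers):
        layers.append(norm_adj @ layers[-1])  # no weights, no activation
    return np.mean(layers, axis=0)

# Toy user-item interaction graph: 2 users, 2 items, as one bipartite
# adjacency matrix A = [[0, R], [R^T, 0]].
R = np.array([[1.0, 0.0],
              [1.0, 1.0]])
n_u, n_i = R.shape
adj = np.zeros((n_u + n_i, n_u + n_i))
adj[:n_u, n_u:] = R
adj[n_u:, :n_u] = R.T
emb = lightgcn_embeddings(adj, np.eye(n_u + n_i), num_layers=2)
```

Popularity scores would then come from dot products between user and item embeddings. Dropping the per-layer weight matrices and activations is exactly what makes this cheap enough for edge devices.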
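The server-side aggregation step can be sketched as plain size-weighted federated averaging; q-FedAvg, which the paper uses, additionally reweights client updates for fairness. This is a minimal illustration under that simplification, not the paper's implementation:

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Weighted FedAvg: average each parameter tensor across clients,
    weighted by each client's local data size."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    return [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*client_params)  # group matching tensors
    ]

# Two clients, one-parameter models, client B holds 3x more data.
client_a = [np.array([2.0])]
client_b = [np.array([4.0])]
global_params = fed_avg([client_a, client_b], client_sizes=[1, 3])
```

Each client would then load `global_params` into its local LightGCN before the next training round.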
-----
Key Insights 💡:
→ GFPCC leverages federated learning to maintain user privacy by keeping training data on user devices. Only model updates are shared.
→ The hierarchical architecture of GFPCC allows for global model refinement while utilizing local user data and preferences.
→ Using LightGCN enables more accurate content popularity prediction by effectively capturing user-item relationships in a graph structure.
→ GFPCC addresses the limitations of traditional caching methods and centralized proactive caching by improving prediction accuracy and ensuring user privacy.
-----
Results 📊:
→ GFPCC achieves higher cache efficiency compared to benchmark algorithms like FPCC, m-epsilon-Greedy, Thompson Sampling, and Random on MovieLens datasets.
→ On MovieLens 100K dataset with cache size 400, GFPCC achieves 46.71% cache efficiency, outperforming FPCC (44.49%), m-epsilon-Greedy (34.14%), Thompson Sampling (21.53%), and Random (5.38%).
→ On MovieLens 1M dataset with cache size 400, GFPCC achieves 55.84% cache efficiency, outperforming FPCC (55.04%), m-epsilon-Greedy (40.84%), Thompson Sampling (32.14%), and Random (9.34%).
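The cache efficiency reported above is, presumably, the cache hit ratio: the fraction of requests served from the proactively cached file set. A minimal sketch of that metric (the function name is mine, not the paper's):

```python
def cache_efficiency(requests, cached_files):
    """Fraction of content requests answered directly from the cache."""
    cached = set(cached_files)
    hits = sum(1 for r in requests if r in cached)
    return hits / len(requests)

# 2 of 4 requests hit the cache -> 0.5
ratio = cache_efficiency(requests=[1, 2, 3, 4], cached_files=[1, 2])
```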