Knowledge Agora




Title UrbanEnQoSPlace: A Deep Reinforcement Learning Model for Service Placement of Real-Time Smart City IoT Applications
ID_Doc 36995
Authors Bansal, M; Chana, I; Clarke, S
Year 2023
Published IEEE Transactions on Services Computing, 16, 4
Abstract Multi-access Edge Computing (MEC) enables IoT applications to place their services in the edge servers of mobile networks, balancing Quality-of-Service (QoS) and energy-efficiency. Previous works consider compute requirements, while the IoT and per-flow communicate (latency/bandwidth) requirements are largely ignored. Moreover, the Smart City domain presents unique challenges - modeling the Urban Smart Things (USTs - urban IoT clients), their connectivity with the MEC network, the diverse resource requirements (compute, communicate, and IoT) of application services, and the federation of multiple MEC providers in a city - all of which we consider in this article. To address these research gaps, we propose: i) UrbanEnQoSMDP - a formulation for energy- and QoS (latency)-optimized service placement for a set of applications in the 'Urban IoT-Federated MEC-Cloud' architecture that satisfies applications' compute, per-flow communicate, and IoT requirements; ii) an 'epsilon-greedy with mask' policy for a priori satisfaction of IoT requirements by shortlisting suitable USTs; iii) UrbanEnQoSPlace - a multi-action Deep Reinforcement Learning (DRL) model, built on the Dueling Deep-Q Network, that uses the proposed policy to solve the UrbanEnQoSMDP and simultaneously place all services of an application. Extensive simulation results illustrate the efficacy and scalability of the proposed model against state-of-the-art DRL algorithms (better convergence, higher rewards, lower runtime, and fewer violations with the proposed policy).
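
The 'epsilon-greedy with mask' policy described in the abstract can be pictured as ordinary epsilon-greedy action selection restricted to placement candidates that already satisfy the IoT requirements. The sketch below is a minimal illustration of that idea only, not the authors' implementation; the function and variable names (select_action, q_values, feasible_mask, epsilon) are assumptions introduced for illustration.

# Minimal sketch of epsilon-greedy action selection with a feasibility mask.
# All names here are illustrative assumptions, not the paper's code.
import numpy as np

def select_action(q_values: np.ndarray, feasible_mask: np.ndarray,
                  epsilon: float, rng: np.random.Generator) -> int:
    """Pick an action index, exploring and exploiting only among feasible actions.

    q_values      : estimated Q-values for every candidate placement, shape [A]
    feasible_mask : 1 for candidates that satisfy the IoT requirements a priori,
                    0 for candidates ruled out before action selection
    epsilon       : exploration probability
    """
    feasible = np.flatnonzero(feasible_mask)
    if feasible.size == 0:
        raise ValueError("no feasible action under the given mask")
    if rng.random() < epsilon:
        # Explore: uniform choice restricted to the shortlisted candidates.
        return int(rng.choice(feasible))
    # Exploit: mask out infeasible actions before taking the argmax.
    masked_q = np.where(feasible_mask.astype(bool), q_values, -np.inf)
    return int(np.argmax(masked_q))

# Example: 5 candidate placements, only indices 1, 3, 4 pass the IoT shortlist.
rng = np.random.default_rng(0)
q = np.array([0.9, 0.2, 0.8, 0.5, 0.1])
mask = np.array([0, 1, 0, 1, 1])
print(select_action(q, mask, epsilon=0.1, rng=rng))  # 3: best feasible, unless exploring

Applying the mask before, rather than after, action selection keeps infeasible placements out of both the exploration and exploitation branches, which matches the abstract's description of shortlisting suitable USTs a priori.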

Similar Articles

ID Article
41979 Wu, HM; Zhang, ZR; Guan, C; Wolter, K; Xu, MX. Collaborate Edge and Cloud Computing With Distributed Deep Learning for Smart City Internet of Things (2020). IEEE Internet of Things Journal, 7, 9
36398 Priya, B; Malhotra, J. Intelligent Multi-connectivity Based Energy-Efficient Framework for Smart City (2023). Journal of Network and Systems Management, 31, 3
44445 Nassar, A; Yilmaz, Y. Deep Reinforcement Learning for Adaptive Network Slicing in 5G for Intelligent Vehicular Systems and Smart Cities (2022). IEEE Internet of Things Journal, 9, 1
43780 Zhao, L; Wang, JD; Liu, JJ; Kato, N. Routing for Crowd Management in Smart Cities: A Deep Reinforcement Learning Perspective (2019). IEEE Communications Magazine, 57, 4
39239 Xu, SY; Liu, QC; Gong, B; Qi, F; Guo, SY; Qiu, XS; Yang, C. RJCC: Reinforcement-Learning-Based Joint Communicational-and-Computational Resource Allocation Mechanism for Smart City IoT (2020). IEEE Internet of Things Journal, 7, 9
41563 Chen, X; Liu, GZ. Federated Deep Reinforcement Learning-Based Task Offloading and Resource Allocation for Smart Cities in a Mobile Edge Network (2022). Sensors, 22, 13
40221 Mahmood, OA; Abdellah, AR; Muthanna, A; Koucheryavy, A. Distributed Edge Computing for Resource Allocation in Smart Cities Based on the IoT (2022). Information, 13, 7
44680 Muhammad, G; Hossain, MS. Deep-Reinforcement-Learning-Based Sustainable Energy Distribution For Wireless Communication (2021). IEEE Wireless Communications, 28, 6
44624 Ale, L; Zhang, N; King, SA; Guardiola, J. Spatio-temporal Bayesian Learning for Mobile Edge Computing Resource Planning in Smart Cities (2021). ACM Transactions on Internet Technology, 21, 3
39280 Wan, XC. Dynamic Resource Management in MEC Powered by Edge Intelligence for Smart City Internet of Things (2024). Journal of Grid Computing, 22, 1