Title |
Deep-Reinforcement-Learning-Based Sustainable Energy Distribution for Wireless Communication |
ID_Doc |
44680 |
Authors |
Muhammad, G; Hossain, MS |
Year |
2021 |
Published |
IEEE Wireless Communications, 28, 6 |
DOI |
10.1109/MWC.015.2100177 |
Abstract |
Many countries and organizations have proposed smart city projects to address exponential population growth by promoting and developing a new paradigm for meeting the electricity demand of cities. Since Internet of Things (IoT)-based systems are used extensively in smart cities, where huge amounts of data are generated and distributed, it can be challenging to capture data directly from a composite environment and to produce precise control behavior in response. Properly scheduling numerous energy devices to meet users' needs is a core requirement of a smart city. Deep reinforcement learning (DRL) is an emerging methodology that can yield successful control behavior for time-variant dynamic systems. This article proposes an efficient DRL-based energy scheduling approach that distributes energy devices effectively according to consumption and users' demand. First, a deep neural network classifies the energy devices currently available in the framework; the DRL agent then schedules them efficiently. Edge-cloud-coordinated DRL is shown to reduce the delay and cost of smart grid energy distribution. |
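To make the abstract's scheduling idea concrete, the following is a minimal sketch of reinforcement-learning-based energy scheduling. It is not the authors' implementation: the paper uses a deep network, whereas this toy uses a tabular Q-learning agent, and the environment (discrete demand levels, one supply unit per active device, the `reward` cost function) is entirely an illustrative assumption.

```python
# Hedged sketch: tabular Q-learning for scheduling energy devices against
# a fluctuating user demand. The paper's approach uses deep RL; a Q-table
# stands in here purely for illustration.
import random

random.seed(0)

N_DEVICES = 3       # assumed: devices that can be switched on, 1 supply unit each
DEMAND_LEVELS = 4   # assumed: discretized user demand, 0..3 units

def reward(demand, devices_on):
    """Negative cost: penalize both unmet demand and wasted supply."""
    return -abs(demand - devices_on)

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    # Q[state][action]: state = current demand level, action = devices to turn on
    Q = [[0.0] * (N_DEVICES + 1) for _ in range(DEMAND_LEVELS)]
    for _ in range(episodes):
        demand = random.randrange(DEMAND_LEVELS)
        for _ in range(10):  # short episode of time steps
            if random.random() < eps:
                a = random.randrange(N_DEVICES + 1)  # explore
            else:
                a = max(range(N_DEVICES + 1), key=lambda x: Q[demand][x])
            r = reward(demand, a)
            next_demand = random.randrange(DEMAND_LEVELS)  # demand drifts over time
            Q[demand][a] += alpha * (r + gamma * max(Q[next_demand]) - Q[demand][a])
            demand = next_demand
    return Q

Q = train()
# Greedy policy: how many devices to activate at each demand level.
policy = [max(range(N_DEVICES + 1), key=lambda a: Q[d][a]) for d in range(DEMAND_LEVELS)]
print(policy)
```

With this toy reward, the learned policy should roughly match supply to demand (activating about `d` devices at demand level `d`); a deep Q-network would replace the table when, as in the paper, the state combines many heterogeneous device and consumption signals.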
Author Keywords |
|
Index Keywords |
Document Type |
Other |
Open Access |
Source |
Science Citation Index Expanded (SCI-EXPANDED) |
EID |
WOS:000745532300017 |
WoS Category |
Computer Science, Hardware & Architecture; Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications |
Research Area |
Computer Science; Engineering; Telecommunications |
PDF |
|