Energy-Efficient Dynamic Enhanced Inter-Cell Interference Coordination Scheme Based on Deep Reinforcement Learning in H-CRAN
2024
Evidence: high
Author Information
Author(s): Choi Hyungwoo, Kim Taehwa, Lee Seungjin, Choi Hoan-Suk, Yoo Namhyun
Primary Institution: Kyungnam University and KAIST
Hypothesis
Can a deep reinforcement learning-based scheme improve energy efficiency and quality of service in heterogeneous cloud radio access networks?
Conclusion
The proposed scheme achieves up to 70% energy savings while improving quality-of-service (QoS) satisfaction.
Supporting Evidence
- The proposed scheme integrates energy consumption into the optimization process.
- Simulation results demonstrate significant improvements in energy savings and quality of service.
- The approach incorporates parameters beyond prior schemes, such as transmission power and CQI thresholds.
Takeaway
This study shows how deep reinforcement learning can reduce energy consumption while maintaining service quality in 5G heterogeneous cloud radio access networks.
Methodology
The study uses simulations to evaluate a deep reinforcement learning-based scheme that dynamically selects enhanced inter-cell interference coordination (eICIC) parameters in H-CRAN.
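To make the idea concrete, here is a minimal toy sketch of the general technique: a reinforcement-learning agent repeatedly picks an interference-coordination configuration (a transmission-power level paired with a CQI threshold) and is rewarded for QoS satisfaction minus an energy penalty. All environment numbers and the reward model below are illustrative assumptions, not the paper's actual simulation setup or parameter space.

```python
import random

random.seed(0)

# Assumed, simplified action space: 3 transmit-power levels x 2 CQI thresholds.
POWER_LEVELS = [0, 1, 2]   # low / medium / high transmit power (hypothetical)
CQI_THRESHOLDS = [0, 1]    # two candidate CQI thresholds (hypothetical)
ACTIONS = [(p, c) for p in POWER_LEVELS for c in CQI_THRESHOLDS]

def reward(power, cqi_thr):
    """Toy reward: QoS improves with power; an energy term penalizes it."""
    qos = 1.0 - 0.4 * (2 - power) - 0.1 * cqi_thr    # assumed QoS model
    energy_cost = 0.3 * power                         # assumed energy model
    return qos - energy_cost + random.gauss(0, 0.01)  # small observation noise

# Epsilon-greedy value learning over the discrete configurations.
q = {a: 0.0 for a in ACTIONS}
alpha, eps = 0.1, 0.1
for _ in range(2000):
    if random.random() < eps:
        a = random.choice(ACTIONS)        # explore a random configuration
    else:
        a = max(q, key=q.get)             # exploit the best-known one
    q[a] += alpha * (reward(*a) - q[a])   # incremental value update

best = max(q, key=q.get)
print("learned configuration:", best)
```

Under this toy model the agent settles on the configuration whose QoS gain best outweighs its energy cost; the paper's contribution is doing this jointly over the full eICIC parameter set with deep reinforcement learning rather than a small lookup table.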
Digital Object Identifier (DOI)