Trepo
Network Energy Saving in 5G-Advanced and Beyond

Jayaweera, Sharada Prabhashwara (2025)

 
Open file
JayaweeraSharadaPrabhashwara.pdf (2.120 MB)



Jayaweera, Sharada Prabhashwara
2025

Master's Programme in Computing Sciences and Electrical Engineering
Faculty of Information Technology and Communication Sciences
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
Acceptance date
2025-12-02
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-2025120111156
Abstract
Energy consumption in 5G networks has emerged as a critical concern, with Base Stations (BS) accounting for a substantial portion of overall network power usage. While various energy-saving techniques have been proposed and studied, cell Discontinuous Transmission and Reception (DTX/DRX) optimization remains relatively under-explored despite its potential for significant Network Energy Saving (NES). This thesis addresses this gap by developing a Deep Reinforcement Learning (DRL) framework to dynamically optimize DTX/DRX ON duration across multiple cells.
A comprehensive System Level Simulator (SLS) was utilized and enhanced with multi-cell capabilities, interference modelling, and a dynamic power consumption model. The DRL framework employs a single-agent approach with centralized coordination, enabling comprehensive network visibility and coordinated decision-making across cells. The reward function was carefully formulated to balance NES with Quality of Service (QoS) requirements.
Critical insights emerged during agent training, including reward misalignment issues where the agent optimized reward values rather than actual network performance. These challenges were addressed through reward function refinements and hyperparameter tuning. The results demonstrate that the proposed framework achieves considerable energy savings while maintaining QoS through dynamic DTX/DRX parameter optimization. Across diverse deployment configurations ranging from high-traffic to low-traffic scenarios, static power consumption was reduced by more than 50% compared to the Always ON baseline, whilst mean throughput degradation remained below 10%. Furthermore, delay characteristics remained acceptable across all configurations, confirming the viability of the DRL framework.
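The abstract describes a reward function that balances Network Energy Saving against QoS (throughput and delay), but does not give its exact form. The following is a minimal illustrative sketch of one such trade-off reward; the weights, terms, and delay budget here are assumptions for illustration, not the formulation used in the thesis.

```python
def reward(energy_saved_frac, throughput_drop_frac, delay_ms,
           w_energy=1.0, w_tput=2.0, w_delay=0.5, delay_budget_ms=50.0):
    """Hypothetical NES/QoS trade-off reward (not the thesis's actual formula).

    Rewards the fraction of static power saved, penalizes relative
    throughput degradation, and penalizes delay only beyond a budget --
    mirroring the abstract's goal of saving energy while keeping
    throughput loss and delay within acceptable limits.
    """
    # Delay incurs no penalty while it stays within the assumed budget.
    delay_excess = max(0.0, delay_ms - delay_budget_ms) / delay_budget_ms
    return (w_energy * energy_saved_frac
            - w_tput * throughput_drop_frac
            - w_delay * delay_excess)


# Example: 50% energy saved, 5% throughput drop, 30 ms delay (within budget).
print(reward(0.5, 0.05, 30.0))
```

A shaped reward of this kind is also one place where the reward misalignment mentioned above can arise: if the energy term dominates, the agent may maximize reward by over-sleeping cells rather than genuinely balancing QoS, which is why the weights would need the kind of refinement the abstract describes.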
Collections
  • Master's theses [41565]
Kalevantie 5
PL 617
33014 Tampereen yliopisto
oa[@]tuni.fi | Privacy notice | Accessibility statement