Cite this article: SHANG Li, CAI Shuo, CUI Junbin, et al. SDN-based MEC resource allocation of a power grid[J]. Power System Protection and Control, 2021, 49(20): 136-143.
DOI: 10.19783/j.cnki.pspc.201623
Funding: This work is supported by the Science and Technology Project of State Grid Corporation of China, "Research on the Power Internet of Things Network Architecture and Edge Terminals of the State Grid Hebei Information and Telecommunication Branch in 2020" (SGHEXTOOGCJS2000167), and the National Natural Science Foundation of China (61971190).
SDN-based MEC resource allocation of a power grid
SHANG Li1, CAI Shuo1, CUI Junbin1, JI Chunhua1, CUI Kangjia2, LI Baogang2
(1. State Grid Hebei Electric Power Co., Ltd. Information and Communication Branch, Shijiazhuang 050000, China; 2. School of Electrical & Electronic Engineering, North China Electric Power University, Baoding 071003, China)
Abstract:
To address the limited resources of edge servers at power grid nodes, an edge computing framework based on a software-defined network is proposed. The Deep Deterministic Policy Gradient (DDPG) reinforcement learning algorithm is used to allocate the computing and storage resources of the edge servers. First, an SDN-based edge computing model for the power grid is established, from which the constraints on edge-server computing and storage resources and on task delay are derived, yielding a mixed-integer nonlinear programming (MINLP) problem to be solved. TensorFlow is then used to build a simulation environment and run the reinforcement learning algorithm, so that grid edge nodes make optimal use of the storage and computing resources of the edge servers. The results show that the reward of the reinforcement learning algorithm trends upward during training, and that the total system delay under DDPG-based allocation is lower. This work is supported by the Science and Technology Project of State Grid Corporation of China (No. SGHEXTOOGCJS2000167) and the National Natural Science Foundation of China (No. 61971190).
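The constrained allocation problem summarized in the abstract can be sketched as a small feasibility-and-objective check. This is a minimal illustration under an assumed delay model (processing delay = required cycles / assigned cycles); the function name, parameters, and all numeric values are hypothetical and do not reproduce the paper's exact MINLP formulation:

```python
import numpy as np

def total_delay(f, s, cycles, data, F, S, d_max):
    """Total task delay for allocation (f, s), or None if infeasible.

    f      -- CPU cycles/s assigned to each task
    s      -- storage assigned to each task
    cycles -- CPU cycles each task requires
    data   -- storage each task requires
    F, S   -- edge-server computing and storage capacity
    d_max  -- per-task delay bound
    """
    f, s = np.asarray(f, dtype=float), np.asarray(s, dtype=float)
    # Server capacity constraints (small tolerance for float round-off).
    if f.sum() > F * (1 + 1e-12) or s.sum() > S * (1 + 1e-12):
        return None
    if np.any(s < data):              # each task's data must fit its slice
        return None
    delay = cycles / f                # processing delay per task
    if np.any(delay > d_max):         # per-task latency constraint
        return None
    return float(delay.sum())

# Two tasks sharing one edge server: a naive even CPU split versus the
# square-root-proportional split, which minimizes total delay for this toy
# model. A DDPG agent is one way to learn such allocations when the system
# model is not available in closed form.
cycles = np.array([2e9, 6e9])         # required CPU cycles per task
data = np.array([1.0, 2.0])           # required storage per task (GB)
F, S, d_max = 8e9, 4.0, 5.0           # server capacity and delay bound

even = total_delay([4e9, 4e9], data, cycles, data, F, S, d_max)
f_opt = F * np.sqrt(cycles) / np.sqrt(cycles).sum()
best = total_delay(f_opt, data, cycles, data, F, S, d_max)
print(even, best)                     # even split: 2.0; sqrt-proportional: ~1.87
```

In a reinforcement-learning formulation such as the paper's, one plausible design is to let the agent's action be the allocation vector (f, s) and use the negative total delay as the reward, with a penalty replacing the `None` returned here for constraint-violating allocations.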
Key words: software-defined network; mobile edge computing; resource virtualization; resource allocation; reinforcement learning