[1] FANG X, HODGE B M, BAI L Q, et al. Mean-variance optimization-based energy storage scheduling considering day-ahead and real-time LMP uncertainties. IEEE Trans. on Power Systems, 2018, 33(6): 7292–7295.
[2] ZHONG Q W, BUCKLEY S, VASSALLO A, et al. Energy cost minimization through optimization of EV, home and workplace battery storage. Science China Technological Sciences, 2018, 61(5): 761–773.
[3] WANG X Y, SUN C, WANG R T, et al. Two-stage optimal scheduling strategy for large-scale electric vehicles. IEEE Access, 2020, 8: 13821–13832.
[4] WANG Y Y, JIAO X H. Multi-objective energy management for PHEV using Pontryagin’s minimum principle and particle swarm optimization online. Science China Information Sciences, 2021, 64(1): 1–3.
[5] YU D M, BRESSER C. Peak load management based on hybrid power generation and demand response. Energy, 2018, 163: 969–985.
[6] AGHAJANI G R, SHAYANFAR H A, SHAYEGHI H. Demand side management in a smart micro-grid in the presence of renewable generation and demand response. Energy, 2017, 126: 622–637.
[7] MURALITHARAN K, SAKTHIVEL R, SHI R. Multi-objective optimization technique for demand side management with load balancing approach in smart grid. Neurocomputing, 2016, 177: 110–119.
[8] DAI B J, WANG R, ZHU K, et al. A demand response scheme in smart grid with clustering of residential customers. Proc. of the IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids, 2019: 1–6.
[9] DERAKHSHAN G, SHAYANFAR H A, KAZEMI A. The optimization of demand response programs in smart grids. Energy Policy, 2016, 94: 295–306.
[10] ZHU X J, HAN H T, GAO S, et al. A multi-stage optimization approach for active distribution network scheduling considering coordinated electrical vehicle charging strategy. IEEE Access, 2018, 6: 50117–50130.
[11] MAZIDI M, MONSEF H, SIANO P. Incorporating price-responsive customers in day-ahead scheduling of smart distribution networks. Energy Conversion and Management, 2016, 115: 103–116.
[12] YAN Y, ZHANG C H, LI K, et al. Synergistic optimal operation for a combined cooling, heating and power system with hybrid energy storage. Science China Information Sciences, 2018, 61(11): 110202.
[13] SOROUDI A, SIANO P, KEANE A. Optimal DR and ESS scheduling for distribution losses payments minimization under electricity price uncertainty. IEEE Trans. on Smart Grid, 2015, 7(1): 261–272.
[14] LI S Y, ZHONG S, PEI Z, et al. Multi-objective reconfigurable production line scheduling for smart home appliances. Journal of Systems Engineering and Electronics, 2021, 32(2): 297–317.
[15] SALEHI J, ABDOLAHI A. Optimal scheduling of active distribution networks with penetration of PHEV considering congestion and air pollution using DR program. Sustainable Cities and Society, 2019, 51: 101709.
[16] LI F, SUN B, ZHANG C H. Operation optimization for integrated energy system with energy storage. Science China Information Sciences, 2018, 61(12): 129207.
[17] LI W H, CUI H, NEMETH T, et al. Deep reinforcement learning-based energy management of hybrid battery systems in electric vehicles. Journal of Energy Storage, 2021, 36: 102355.
[18] WAN Z Q, LI H P, HE H B, et al. Model-free real-time EV charging scheduling based on deep reinforcement learning. IEEE Trans. on Smart Grid, 2018, 10(5): 5246–5257.
[19] LU R Z, HONG S H, YU M M. Demand response for home energy management using reinforcement learning and artificial neural network. IEEE Trans. on Smart Grid, 2019, 10(6): 6629–6639.
[20] CAO J Y, DONG L, XUE L. Load scheduling for an electric water heater with forecasted price using deep reinforcement learning. Proc. of the Chinese Automation Congress, 2020: 2500–2505.
[21] XI L, YU L, XU Y C, et al. A novel multi-agent DDQN-AD method-based distributed strategy for automatic generation control of integrated energy systems. IEEE Trans. on Sustainable Energy, 2019, 11(4): 2417–2426.
[22] LI H P, WAN Z Q, HE H B. A deep reinforcement learning based approach for home energy management system. Proc. of the IEEE Power & Energy Society Innovative Smart Grid Technologies Conference, 2020: 1–5.
[23] XU X, JIA Y W, XU Y, et al. A multi-agent reinforcement learning-based data-driven method for home energy management. IEEE Trans. on Smart Grid, 2020, 11(4): 3201–3211.
[24] TSANG N, CAO C, WU S, et al. Autonomous household energy management using deep reinforcement learning. Proc. of the IEEE International Conference on Engineering, Technology and Innovation, 2019. DOI: 10.1109/ICE.2019.8792636.
[25] WANG Y D, LIU H, ZHENG W B, et al. Multi-objective workflow scheduling with deep-Q-network-based multi-agent reinforcement learning. IEEE Access, 2019, 7: 39974–39982.
[26] YU L, XIE W W, XIE D, et al. Deep reinforcement learning for smart home energy management. IEEE Internet of Things Journal, 2019, 7(4): 2751–2762.
[27] CHUNG H M, MAHARJAN S, ZHANG Y, et al. Distributed deep reinforcement learning for intelligent load scheduling in residential smart grid. IEEE Trans. on Industrial Informatics, 2020, 17(4): 2752–2763.
[28] LEE S, CHOI D H. Energy management of smart home with home appliances, energy storage system and electric vehicle: a hierarchical deep reinforcement learning approach. Sensors, 2020, 20(7): 2157.
[29] ALFAVERH F, DENAI M, SUN Y C. Demand response strategy based on reinforcement learning and fuzzy reasoning for home energy management. IEEE Access, 2020, 8: 39310–39321.
[30] ZHOU S Y, HU Z J, GU W, et al. Artificial intelligence based smart energy community management: a reinforcement learning approach. CSEE Journal of Power and Energy Systems, 2019, 5(1): 1–10.
[31] LOWE R, WU Y, TAMAR A, et al. Multi-agent actor-critic for mixed cooperative-competitive environments. Proc. of the 31st International Conference on Neural Information Processing Systems, 2017: 6382–6393.
[32] MASSON W, RANCHOD P, KONIDARIS G. Reinforcement learning with parameterized actions. Proc. of the AAAI Conference on Artificial Intelligence, 2016: 1934–1940.
[33] XIONG J C, WANG Q, YANG Z R, et al. Parametrized deep Q-networks learning: reinforcement learning with discrete-continuous hybrid action space. https://arxiv.org/abs/1810.06394.
[34] BESTER C J, JAMES S D, KONIDARIS G D. Multi-pass Q-networks for deep reinforcement learning with parameterized action spaces. https://arxiv.org/abs/1905.04388.
[35] FU H T, TANG H Y, HAO J Y, et al. Deep multi-agent reinforcement learning with discrete-continuous hybrid action spaces. https://arxiv.org/abs/1903.04959.
[36] BELLMAN R. Dynamic programming. Science, 1966, 153(3731): 34–37.
[37] LILLICRAP T P, HUNT J J, PRITZEL A, et al. Continuous control with deep reinforcement learning. https://arxiv.org/abs/1509.02971.
[38] SUTTON R S, MCALLESTER D A, SINGH S P, et al. Policy gradient methods for reinforcement learning with function approximation. Proc. of the 12th International Conference on Neural Information Processing Systems, 1999: 1057–1063.
[39] HAARNOJA T, ZHOU A, HARTIKAINEN K, et al. Soft actor-critic algorithms and applications. https://arxiv.org/abs/1812.05905.