Journal of Systems Engineering and Electronics ›› 2022, Vol. 33 ›› Issue (4): 771-784.doi: 10.23919/JSEE.2022.000077
Weijian PANG1,2, Hui LI1, Xinyi MA1,3, Hailin ZHANG1
Received:
2022-02-14
Online:
2022-08-30
Published:
2022-08-30
Weijian PANG, Hui LI, Xinyi MA, Hailin ZHANG. A semantic-centered cloud control framework for autonomous unmanned system[J]. Journal of Systems Engineering and Electronics, 2022, 33(4): 771-784.
"
Number | Rule body | Description |
Rule#1 | Uxv(?x)^ isIdle(?x, ?y)^ swrlb: booleanNot(?y, false)-> UxvAvailable(?x) | Infer the availability of vehicles |
Rule#2 | UxvAvailable (?x)^ hasEndurance(?x, ?y)^ swrlb: greaterThan(?y, 80) →UxvEndurance (?x, HIGH) | Duration time inference |
Rule#3 | BaseStation(?bs)^ hasPosition(?bs, ?pos1) ^Uxv(?uxv)^ hasPosition(?uxv, ?pos2)^ hasRemainEndurance (?uxv, ?re)^ swrlb:divide(?pos1, ?pos2, ?dis)^ swrlb:lessThan(?dis, ?re) → Alert(?alert)^ hasUxv(?alert, ? uxv)^ hasAlertType(?alert, INSUFFICIENT_ENDURANCE) | Infer low endurance alert event |
Rule#4 | Uxv(?uxv)^ Entity(?entity)^ inFrontOf(?entity, ?uxv) ^ →Obstacle(?entity) | Infer obstacle type |
Rule#5 | Tree(?tree)^ hasHight(?tree,?hight) ^hasWidth(?tree,?width)^ swrlb:divide(?rate, ?width, ?hight)^ swrlb:greatThan(?rate, 4) →hasState(?tree, FALLEN) | Infer entity state |
Rule#6 | Tree(?tree)^ hasHight(?tree,?hight) ^hasWidth(?tree,?width)^ hasDensity(?tree, ?density) ^ swrlb: multiply (?mass, ?width, ?width, ?hight, ?density) →hasMass(?tree, ?mass) | Estimate the mass of the tree |
Rule#7 | Tree(?tree)^ on(?tree, ?road)^ Road(?road)^ hasWidth(?tree, ?wk)^ hasWidth(?road, ?wd)^swrlb: divide(?wd, 2)^ swrlb:greaterThan(?wk, ?wd) →hasState(?road, BLOCKED) | Infer the road is blocked or not |
Rule#8 | Carrier(?u)^ Wrecker(?w)^ Road(?r)^ Tree(?tree)^ at(?u, ?r)^ hasState(?r, BLOCKED)^ on(?tree, ?road)^ hasMass(?t, ?m)^ hasMaxLift(?w, ?l)^ swrlb:lessThan(?m, ?l)→hasCandidateTask(?w, OBSTACLE_CLEANNING) | Infer obstacle cleaning task |
Rule#9 | Tree(?tree)^ hasMass(?tree, ?mass)^ Uxv(?uxv)^ hasMaxGrip(?uxv, ?grip)^ swrlb:greaterThan (?grip, ?mass) → hasCandidateUxv(?task, ?uxv) | Infer candidate task executor |
Rule#10 | Carrier(?c)^ ForkLift(?fl)^ Package(?p)^ WarHouse(?r)^ at(?c, ?wh)^ hasTargetPakage(?c, ?p)^ hasMass (?p, ?m)^ hasMaxLift(?fl, ?l)^ swrlb:lessThan(?m, ?l)→hasCandidateTask(?w, LIFT_COOPERATION) | Infer lift cooperative task |
Rule#11 | Carrier(?c)^ Road(?r)^ MaintenanceSign(?ms)^ at(?c, ?r)^ hasState(?r, BLOCKED)^ hasSign (?r, ?ms)→hasCandidateTask(?w, ROUTE_REPLANNING) | Replanning path when the road is blocked since maintenance |
Rule#12 | Carrier(?u)^ Wrecker(?w)^ Road(?r)^ Tree(?tree)^ at(?u, ?r)^ hasState(?r, BLOCKED)^ on(?tree, ?road)^ hasMass(?t, ?m)^ hasMaxLift(?w, ?l)^ swrlb:greaterThan(?m, ?l)→hasCandidateTask(?w, TOUTE_REPLANNING) | Replanning path when the road is blocked by an obstacle |
Rule#13 | Carrier(?c)^Customer(?p)^ Address(?a)^ at(?c, ?a)^ hasAddress(?p, ?a→hasCandidateTask(?c, PLACING_PACKAGE) | Infer placing package action |
Rule#14 | Carrier(?c)^ ForkLift(?fl)^ Package(?p)^ WarHouse(?r)^ at(?c, ?wh)^ hasTargetPakage(?c, ?p)^ hasMass(?p, ?m)^ hasMaxLift(?fl, ?l)^ swrlb:greaterThan(?m, ?l)→Alert(?alert)^ hasUxv(?alert, ?c)^ hasAlertType(?alert, LIFT_COOPERATION_FAILED) | Infer alert of forklift cannot lift a package |
"
| Item | Description | Action |
|---|---|---|
| Task name | Delivery task with exception event | — |
| Precondition | #1 and #2 delivery UGVs, #3 wrecker UGV, and #4 forklift UGV are located at the depot; a new package arrives | — |
| Event flow | UGVs navigate autonomously | — |
| | UGVs report to the cloud control center | — |
| | #1 or #2 UGV conducts the delivery task (Task 1) | Query related information: candidate vehicle, package location, destination location |
| | | Get package (Action 11) |
| | | Navigate to destination (Action 12) |
| | | Complete delivery (Action 13) |
| | #3 wrecker conducts the obstacle-cleaning task (Task 2) | Query related information: obstacle type, location |
| | | Navigate to the obstacle (Action 21) |
| | | Clean up the obstacle (Action 22) |
| | | Navigate to the depot (Action 23) |
| | #4 forklift conducts the lift-cooperation task when needed (Task 3) | Query related information: package location |
| | | Navigate to the package (Action 31) |
| | | Load the package (Action 32) |
| | | Navigate to the depot (Action 33) |
| Exception handling | #1 or #2 UGV delivery task fails because the road is blocked | Generate a scene graph and report to the control center |
| | | According to the inference result, generate a new "obstacle cleaning" task (Task 2) |
| | #1 or #2 UGV delivery task fails because of road maintenance | Acquire sensor data and update cloud KB information |
| | | According to the inference result, replan the path |
| | #1 or #2 UGV arrives at a warehouse | Acquire sensor data and update cloud KB information |
| | | According to the inference result, generate a new "Lift_Cooperation" task (Task 3) |
| | #1 or #2 UGV arrives at the customer address | According to the inference result, unload the package |
| Effect | Each vehicle completes its designed task and reports to the cloud control center | — |
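The exception-handling flow above (blocked road → infer an obstacle-cleaning task → select a capable wrecker, per Rule#8) can be sketched as a purely illustrative dispatch step; the vehicle names and lift capacities below are hypothetical:

```python
def assign_obstacle_cleaning(obstacle_mass, wreckers):
    """Mimic Rule#8: a wrecker whose max lift exceeds the obstacle's
    mass becomes a candidate executor; pick the first candidate.
    Returns None when no wrecker is capable (Rule#12 would then
    trigger ROUTE_REPLANNING for the blocked carrier instead)."""
    candidates = [name for name, max_lift in wreckers.items()
                  if max_lift > obstacle_mass]
    return candidates[0] if candidates else None

# Hypothetical fleet: wrecker name -> max lift capacity (kg).
fleet = {"wrecker3": 500.0, "wrecker4": 120.0}
```

In the framework itself this decision is produced by the cloud knowledge base and reasoner after the UGV uploads its scene graph; the sketch only isolates the capacity check.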