Intelligent-Perception-Based Trackless Navigation Technology for AGVs in Port Environments

Funding Source

National Key R&D Program of China (NKRD)

Principal Investigator

Liu Yong

Host Institution

Zhejiang University

Project Number

2017YFB1302003

Award Year

2017

Award Date

Not disclosed

Project Duration

Unknown / Unknown

Project Level

National

Funding Amount

RMB 1.68 million

Discipline

Intelligent Robotics

Discipline Code

Not disclosed

Funding Category

Not disclosed

Keywords

autonomous navigation; port AGV; mapping

Participants

Wang Yue; Yin Huan

Participating Institutions

Not disclosed

Project Application Abstract: Autonomous mobile robots, or AGVs, are an important direction in the development of intelligent technology. The world's manufacturing industry is currently shifting from mass production to user customization, creating strong demand for intelligent, personalized manufacturing. Robots with autonomous mobility will become indispensable equipment and a key means of automation in future intelligent manufacturing. Beyond manufacturing, advanced mobile manipulation technology can also be widely applied in service sectors such as hospitals and housekeeping, and in special domains such as port transport, manned spaceflight, and disaster relief, giving it broad prospects. On the technical side, the foremost task for an intelligent AGV is self-localization. In complex and changing work environments, GPS-based positioning cannot meet the required navigation accuracy, so high-precision maps are a prerequisite for robot localization. This report presents contributions in the following areas. Chapter 1 addresses place recognition from 3D laser sensor data, which provides loop-closure detection during map construction and reduces accumulated mapping error; it focuses on a neural-network-based place recognition method that, in comparisons with other algorithms, improves both accuracy and efficiency. Chapter 2 presents a localization method for heterogeneous maps, which exploits correlations between heterogeneous sensor modalities so that a camera can localize within a laser map; this shields the system from the many appearance changes a visual sensor faces in real environments and makes long-term vision-based localization possible. The algorithm achieved localization over a one-year time span in an outdoor environment. Chapter 3 summarizes the techniques presented in this report and the results achieved.
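
The loop-closure step described in Chapter 1 follows a common descriptor-based place-recognition pattern. Below is a minimal, hedged Python sketch of that pattern only: the learned descriptor network is replaced by a placeholder, and the similarity threshold is an assumed value, not one reported by the project.

```python
# Minimal sketch of descriptor-based loop-closure detection (Chapter 1 pattern).
# The neural-network descriptor is abstracted as a placeholder; all names and
# thresholds are illustrative, not taken from the project.
import numpy as np

def describe_scan(scan: np.ndarray) -> np.ndarray:
    """Placeholder for a learned place-recognition network: maps a 3D laser
    scan (N x 3 points) to a fixed-length, unit-norm global descriptor."""
    # A real system would run a neural network here; a coarse occupancy
    # histogram keeps the sketch runnable.
    hist, _ = np.histogramdd(scan, bins=(4, 4, 4))
    d = hist.ravel().astype(np.float64)
    return d / (np.linalg.norm(d) + 1e-12)

def detect_loop(descriptor, database, min_similarity=0.9):
    """Return the index of the best-matching past place, or None."""
    if not database:
        return None
    sims = np.array([float(descriptor @ d) for d in database])  # cosine sim
    best = int(np.argmax(sims))
    return best if sims[best] >= min_similarity else None

# Usage: query the database before inserting each new place. (Random stand-in
# scans all look alike; real scans would only match at genuine revisits.)
database = []
for t in range(5):
    scan = np.random.rand(1000, 3)          # stand-in for a LiDAR scan
    d = describe_scan(scan)
    match = detect_loop(d, database)
    if match is not None:
        print(f"loop closure between place {t} and place {match}")
    database.append(d)
```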


Funded Province

Zhejiang Province

Related Publications
  • 1. Efficient motion planning based on kinodynamic model for quadruped robots following persons in confined spaces

    • Keywords:
    • Indoor positioning systems; Gait analysis; Robot programming; Mobile robots; Trajectories; Biophysics; Constraint theory; Motion planning; Complex environments; Ground reaction forces; Inequality constraint; On-board sensors; Person following; Quadruped robots; Receding horizon control; Static and dynamic obstacles
    • Zhang, Zhen; Yan, Jiaqing; Kong, Xin; Zhai, Guangyao; Liu, Yong
    • IEEE/ASME Transactions on Mechatronics
    • 2021
    • Vol. 26
    • Issue 4
    • Journal

    Quadruped robots have superior terrain adaptability and more flexible movement capabilities than traditional robots. In this article, we innovatively apply them to person-following tasks and propose an efficient motion planning scheme for quadruped robots to generate a flexible and effective trajectory in confined spaces. The method builds a real-time local costmap via onboard sensors, which accounts for both static and dynamic obstacles. We exploit a simplified kinodynamic model and formulate friction pyramids from inequality constraints on the ground reaction forces to ensure the executability of the optimized trajectory. In addition, we obtain the optimal following trajectory in the costmap based entirely on the robot's rectangular footprint description, which ensures that it can walk through narrow spaces while avoiding collision. Finally, a receding horizon control strategy is employed to improve the robustness of motion in complex environments. The proposed motion planning framework is integrated on the quadruped robot JueYing and tested in simulation as well as real scenarios. The execution success rates in various scenes are all over 90%.

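    The friction-pyramid constraints mentioned above are a standard linearization of the friction cone. The Python sketch below illustrates just that constraint, with an assumed friction coefficient; it is not the paper's optimizer, where these inequalities would enter as linear constraints on each stance foot's ground reaction force.

    ```python
    # Minimal sketch of friction-pyramid inequality constraints: each foot's
    # ground reaction force f = (fx, fy, fz) must stay inside a linearized
    # friction cone. The coefficient value is illustrative only.
    import numpy as np

    def friction_pyramid_ok(f: np.ndarray, mu: float = 0.6) -> bool:
        """Pyramid (linearized cone) test: fz >= 0, |fx| <= mu*fz, |fy| <= mu*fz."""
        fx, fy, fz = f
        return fz >= 0.0 and abs(fx) <= mu * fz and abs(fy) <= mu * fz

    # In a trajectory optimizer these four inequalities per stance foot would
    # be imposed as linear constraints; here we just test candidate forces.
    print(friction_pyramid_ok(np.array([10.0, 5.0, 50.0])))  # True: inside cone
    print(friction_pyramid_ok(np.array([40.0, 0.0, 50.0])))  # False: would slip
    ```
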
  • 2. Visual-Inertial Localization With Prior LiDAR Map Constraints

    • Keywords:
    • Sensor fusion; localization; SLAM; visual-based navigation; odometry; robust
    • Zuo, Xingxing; Geneva, Patrick; Yang, Yulin; Ye, Wenlong; Liu, Yong; Huang, Guoquan
    • IEEE Robotics and Automation Letters
    • 2019
    • Vol. 4
    • Issue 4
    • Journal

    In this letter, we develop a low-cost stereo visual-inertial localization system, which leverages efficient multi-state constraint Kalman filter (MSCKF)-based visual-inertial odometry (VIO) while utilizing an a priori LiDAR map to provide bounded-error three-dimensional navigation. Besides the standard sparse visual feature measurements used in VIO, the global registrations of visual semi-dense clouds to the prior LiDAR map are also exploited in a tightly-coupled MSCKF update, thus correcting accumulated drift. This cross-modality constraint between visual and LiDAR pointclouds is particularly addressed. The proposed approach is validated on both Monte Carlo simulations and real-world experiments, showing that LiDAR map constraints between clouds created through different sensing modalities greatly improve the standard VIO and provide bounded-error performance.

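    As a rough illustration of how a cloud-to-map registration result can correct accumulated VIO drift inside a Kalman update, here is a heavily simplified Python sketch: the state is reduced to a 3D position and the registration is abstracted as a direct position measurement, unlike the paper's full tightly-coupled MSCKF.

    ```python
    # Simplified sketch: a registration against the prior LiDAR map enters a
    # Kalman update as a global position fix. State is position-only here;
    # all numeric values are illustrative.
    import numpy as np

    def ekf_update(x, P, z, R):
        """Standard KF update with identity measurement model H = I: the map
        registration yields a direct global position measurement z."""
        H = np.eye(3)
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ (z - H @ x)          # corrected state
        P = (np.eye(3) - K @ H) @ P      # corrected covariance
        return x, P

    # Drifted VIO estimate, corrected by a (simulated) map-registration fix.
    x = np.array([10.3, -2.1, 0.4])      # VIO position with accumulated drift
    P = np.diag([0.5, 0.5, 0.2])         # grown uncertainty
    z = np.array([10.0, -2.0, 0.35])     # position from cloud-to-map alignment
    R = np.diag([0.05, 0.05, 0.05])      # registration noise (illustrative)
    x, P = ekf_update(x, P, z, R)
    print(x)  # pulled toward the map-registered position, bounding the drift
    ```
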
  • 3. ObjectFusion: An object detection and segmentation framework with RGB-D SLAM and convolutional neural networks

    • Keywords:
    • Intelligent sensing; CNNs; SLAM; Object detection; Segmentation
    • Tian, Guanzhong; Liu, Liang; Ri, JongHyok; Liu, Yong; Sun, Yiran
    • Neurocomputing
    • 2019
    • Vol. 345
    • Journal

    Given the driving advances in CNNs (convolutional neural networks) [1], deploying deep neural networks for accurate detection and semantic reconstruction in SLAM (simultaneous localization and mapping) has become a trend. However, as far as we know, almost all existing methods focus on designing a specific CNN architecture for a single task. In this paper, we propose a novel framework which fuses a general object detection CNN with a SLAM system to obtain better performance on both detection and semantic segmentation in 3D space. Our approach first uses a CNN-based detection network to obtain 2D object proposals, which are used to establish a local target map. We then use the poses estimated by SLAM to update the dynamic global target map from the local target map obtained by the CNN. Finally, we obtain the detection result for the current frame by projecting the global target map into 2D space. In parallel, we send the estimation results back to SLAM and update the semantic surfel model in the SLAM system, so the segmentation result can be acquired by projecting the updated 3D surfel model into 2D. Our fusion scheme benefits object detection and segmentation by integrating with the SLAM system to preserve spatial continuity and temporal consistency. Evaluations on four datasets demonstrate the effectiveness and robustness of our method.

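    The framework repeatedly projects the 3D target map into the current 2D frame. A minimal Python sketch of that pinhole projection step follows, with illustrative intrinsics; the map maintenance and CNN stages are omitted.

    ```python
    # Minimal sketch of projecting mapped 3D object points back into the
    # current 2D frame with a pinhole model. Intrinsics and points are
    # illustrative values, not from the paper.
    import numpy as np

    def project_points(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
        """Project 3D points in the camera frame (N x 3, z > 0) to pixels (N x 2)."""
        uvw = (K @ points_cam.T).T          # homogeneous image coordinates
        return uvw[:, :2] / uvw[:, 2:3]     # perspective division

    K = np.array([[525.0,   0.0, 320.0],    # fx, 0, cx (illustrative intrinsics)
                  [  0.0, 525.0, 240.0],    # 0, fy, cy
                  [  0.0,   0.0,   1.0]])
    object_points = np.array([[0.2, 0.1, 2.0],    # 3D points of a mapped object,
                              [0.3, 0.1, 2.1]])   # already in the camera frame
    print(project_points(object_points, K))       # 2D evidence for current frame
    ```
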
  • 4. Active Learning-Based Grasp for Accurate Industrial Manipulation

    • Keywords:
    • Cameras; Convolution; Neural networks; Object detection; Robotics; Accurate grasp; Active learning; Camera intrinsic parameters; Closed-loop control; Convolutional networks; Generalization ability; Hand-eye transformation; Object detection and localization
    • Fu, Xiaokuan; Liu, Yong; Wang, Zhilei
    • IEEE Transactions on Automation Science and Engineering
    • 2019
    • Vol. 16
    • Issue 4
    • Journal

    We propose an active learning-based grasp method for accurate industrial manipulation that combines the high accuracy of geometrically driven grasp methods with the generalization ability of data-driven grasp methods. Our grasp sequence consists of a pregrasp stage and a grasp stage, integrating active perception and manipulation. In the pregrasp stage, the manipulator actively moves and perceives the object. At each step, given the perception image, a motion is chosen so that the manipulator can adjust to a proper pose to grasp the object. We train a convolutional neural network to estimate the motion and combine the network with closed-loop control so that the end effector can move to the pregrasp state. In the grasp stage, the manipulator executes a fixed motion to complete the grasp task; this fixed motion can be conveniently acquired from a demonstration by a non-expert. Our proposed method does not require prior knowledge of camera intrinsic parameters, the hand-eye transformation, or manually designed object features. Instead, the training data sets containing this prior knowledge are collected through interactive perception. The method can easily be transferred to new tasks with a few human interventions and completes high-accuracy grasp tasks with a degree of robustness to partial observation. In our circuit board grasping tests, we achieved a grasp accuracy of 0.8 mm and 0.6°. Note to Practitioners: The research in this paper is motivated by the following practical problem. Manipulators on industrial lines can complete high-accuracy tasks with hand-crafted object features, with perception used only for object detection and localization. This is not flexible, since the prior knowledge differs across tasks and takes a long time to deploy for a new task; moreover, only well-trained experts are qualified to complete the deployment process. Our grasp method uses a convolutional network to estimate the manipulator's motion directly from images. The camera is mounted on the manipulator and can perceive the object actively. The training data set of the network is specific to each object and can be collected automatically with a few human interventions. Our method simplifies the deployment process and can be applied in the 3C industry (computers, communications, and consumer electronics), where products upgrade frequently.

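    The pregrasp stage is a closed loop in which a network maps the current image to a motion command. The Python sketch below shows only that loop's structure; the CNN is a stand-in that steps toward a hidden target, and all values are assumptions rather than the paper's trained model.

    ```python
    # Minimal sketch of the pregrasp closed loop: a (placeholder) network maps
    # the current image to a corrective motion, applied until the end effector
    # reaches the pregrasp pose. Robot, network, and threshold are illustrative.
    import numpy as np

    def predict_motion(image: np.ndarray, pose: np.ndarray) -> np.ndarray:
        """Placeholder for the trained CNN: returns a small pose correction.
        Faked here as a step toward a hidden target, to keep the loop runnable."""
        target = np.array([0.40, 0.05, 0.12])
        return 0.5 * (target - pose)

    pose = np.zeros(3)                       # end-effector position (m)
    for step in range(20):                   # closed-loop pregrasp stage
        image = np.zeros((224, 224, 3))      # stand-in for the camera image
        delta = predict_motion(image, pose)
        pose = pose + delta                  # command the manipulator
        if np.linalg.norm(delta) < 1e-3:     # converged to the pregrasp state
            break
    print(f"pregrasp reached in {step + 1} steps at {pose.round(3)}")
    # The grasp stage would then execute the fixed, demonstrated motion.
    ```
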
  • 5. Learning-Based Hand Motion Capture and Understanding in Assembly Process

    • Keywords:
    • Production control; Uncertainty analysis; Assembly; Palmprint recognition; Application level; Assembly line; Computational costs; Detection-based tracking; Embedded computing devices; Localization modeling; Production planning; Temporal action localization
    • Liu, Liang; Liu, Yong; Zhang, Jiangning
    • IEEE Transactions on Industrial Electronics
    • 2019
    • Vol. 66
    • Issue 12
    • Journal

    Manual assembly is still an essential part of modern manufacturing. Understanding the actual state of the assembly process can not only improve quality control of products, but also provide comprehensive data for production planning and proficiency assessment. Addressing the rising complexity caused by uncertainty in manual assembly, this paper presents an efficient approach to automatically capture and analyze hand operations in the assembly process. A detection-based tracking method is introduced to capture trajectories of hand movement from the camera installed at each workstation. The actions in the hand trajectories are then identified with a novel temporal action localization model. The experimental results show that our method reaches application level with high accuracy and low computational cost. The proposed system is lightweight enough to be quickly set up on an embedded computing device for real-time online inference, as well as on a cloud server for offline analysis.

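    Detection-based tracking, the first stage above, typically links per-frame detections into trajectories by overlap association. Here is a minimal Python sketch of greedy IoU association under that assumption; the detector and the paper's actual association rule are not reproduced.

    ```python
    # Minimal sketch of detection-based tracking: per-frame hand detections
    # are linked into trajectories by greedy IoU association. Thresholds and
    # boxes are illustrative.
    def iou(a, b):
        """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    def associate(tracks, detections, thresh=0.3):
        """Greedily extend each track with its best-overlapping detection."""
        unmatched = list(detections)
        for track in tracks:
            if not unmatched:
                break
            best = max(unmatched, key=lambda d: iou(track[-1], d))
            if iou(track[-1], best) >= thresh:
                track.append(best)
                unmatched.remove(best)
        tracks.extend([d] for d in unmatched)   # start new tracks

    tracks = []
    associate(tracks, [(10, 10, 50, 50)])       # frame 1: one hand detected
    associate(tracks, [(14, 12, 54, 52)])       # frame 2: linked to track 0
    print(len(tracks), len(tracks[0]))          # 1 track of length 2
    ```
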
  • 6. Accurate and Real-Time 3-D Tracking for the Following Robots by Fusing Vision and Ultrasonar Information

    • Keywords:
    • Extended Kalman filter (EKF); following robot; three-dimensional (3-D) tracking; ultrasonic; visual tracking; people tracking
    • Wang, Mengmeng; Liu, Yong; Su, Daobilige; Liao, Yufan; Shi, Lei; Xu, Jinhong; Miro, Jaime Valls
    • IEEE/ASME Transactions on Mechatronics
    • 2018
    • Vol. 23
    • Issue 3
    • Journal

    Acquiring the accurate three-dimensional (3-D) position of a target person around a robot provides valuable information that is applicable to a wide range of robotic tasks, especially for promoting the intelligent manufacturing processes of industries. This paper presents a real-time robotic 3-D human tracking system that combines a monocular camera with an ultrasonic sensor by an extended Kalman filter (EKF). The proposed system consists of three submodules: a monocular camera sensor tracking module, an ultrasonic sensor tracking module, and the multisensor fusion algorithm. An improved visual tracking algorithm is presented to provide 2-D partial location estimation. The algorithm is designed to overcome severe occlusions, scale variation, and target loss, and to achieve robust redetection. The scale accuracy is further enhanced by the estimated 3-D information. An ultrasonic sensor array is employed to provide the range information from the target person to the robot, and time of flight is used for the 2-D partial location estimation. An EKF is adopted to sequentially process multiple, heterogeneous measurements arriving in an asynchronous order from the vision sensor and the ultrasonic sensor separately. In the experiments, the proposed tracking system is tested on both a simulation platform and an actual mobile robot in various indoor and outdoor scenes. The experimental results show the convincing performance of the 3-D tracking system in terms of both accuracy and robustness.

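    The asynchronous fusion can be pictured as an EKF that applies whichever measurement arrives next with its own model: a range from the ultrasonic array or a bearing from the camera. The Python sketch below reduces the paper's system to a 2D target state with illustrative noise values; it is a pattern sketch, not the paper's filter.

    ```python
    # Minimal sketch of asynchronous EKF fusion: a 2D target position is
    # corrected by whichever measurement arrives next, each with its own
    # measurement model. All noise values are illustrative.
    import numpy as np

    x = np.array([2.0, 1.0])            # target position (x, y) in robot frame
    P = np.eye(2)

    def update(x, P, z, h, H, R):
        """Generic EKF update for measurement z with model h and Jacobian H."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ (z - h(x)), (np.eye(2) - K @ H) @ P

    def range_model(x):                 # ultrasonic: distance to target
        return np.array([np.hypot(x[0], x[1])])

    def bearing_model(x):               # camera: bearing angle to target
        return np.array([np.arctan2(x[1], x[0])])

    def range_jac(x):
        r = np.hypot(x[0], x[1])
        return np.array([[x[0] / r, x[1] / r]])

    def bearing_jac(x):
        r2 = x[0] ** 2 + x[1] ** 2
        return np.array([[-x[1] / r2, x[0] / r2]])

    # Measurements arrive in arbitrary order; each is applied as it comes in.
    x, P = update(x, P, np.array([2.3]), range_model, range_jac(x), np.array([[0.01]]))
    x, P = update(x, P, np.array([0.5]), bearing_model, bearing_jac(x), np.array([[0.001]]))
    print(x)
    ```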