Autonomous Tracking and Cooperative Control of Multi-Robot Mobile Machining for Large Complex Components

Funding Source

National Natural Science Foundation of China (NSFC)

Principal Investigator

Tao Bo

Host Institution

Huazhong University of Science and Technology

Grant Number

91748204

Year of Approval

2017

Approval Date

Not disclosed

Research Period

Unknown / Unknown

Project Level

National

Funding Amount

RMB 2.85 million

Discipline

Engineering and Materials Science - Mechanical Design and Manufacturing - Manufacturing Systems and Intelligentization

Discipline Code

E-E05-E0510

Funding Category

Major Research Plan - Key Support Project - Fundamental Theory and Key Technology Research on Tri-Co (Coexisting-Cooperative-Cognitive) Robots

Keywords

Large Component; Measurement; Cooperative Control; Robotic Machining; Industrial Robot

Participants

Tao Wenbing; Zhu Weidong; Chen Zhiyong; Guo Yingjie; Dong Huiyue; He Wentao; Gong Zeyu; Fan Qi; Chu Honghao

Participating Institution

Zhejiang University

Application Abstract: This project addresses the challenge of efficient, adaptive machining of very large complex components. By integrating mobile robotics, digital manufacturing technology, and artificial intelligence, it aims to establish new principles and new equipment for adaptive machining with mobile robots. The research centers on compliant end effectors, dual-robot cooperative control, and efficient in-situ precision measurement, specifically: (1) design and control of a force-position double-closed-loop end effector for the mobile machining system; (2) multi-robot cooperative motion planning and control under complex constraints; (3) local in-situ precision measurement and efficient global reconstruction of large complex components; (4) prototype construction and experimental verification of the robotic mobile machining system. The project is expected to establish a theory of autonomous tracking and cooperative control for multi-robot mobile machining of large complex components; to realize active compliance of the robot equipment to the workpiece through an actively compliant end effector under force-position hybrid control; and, through dual-robot cooperative control and efficient in-situ measurement, to achieve efficient adaptive machining of low-rigidity, very large complex components. The outcomes are of significance for raising the level of automated, intelligent manufacturing of large complex structural parts in China and for advancing the technology of robotized intelligent manufacturing equipment.
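The force-position double-closed-loop end effector concept can be illustrated with a toy single-axis simulation. This is a minimal sketch under assumed numbers (environment stiffness, PI gains), not the project's actual controller: the tool is position-commanded along the contact normal, and an outer force loop adjusts that command until the contact force reaches its setpoint.

```python
# Toy single-axis force-position loop (all numeric values are illustrative).
def simulate_force_control(f_des=10.0, k_env=2000.0, kp=0.0005, ki=0.000004,
                           steps=2000):
    """A PI force loop adjusts the commanded penetration x_cmd (mm) until
    the contact force k_env * x_cmd (N) reaches the setpoint f_des."""
    x_cmd = 0.0       # commanded penetration into the surface
    integral = 0.0    # accumulated force error
    f = 0.0
    for _ in range(steps):
        f = k_env * max(x_cmd, 0.0)          # elastic contact model
        err = f_des - f
        integral += err
        x_cmd += kp * err + ki * integral    # PI correction of the position command
    return f

final_force = simulate_force_control()
```

In a real system the outer force loop would wrap the robot's inner position controller in the same way, with the tangential axes left under pure position control.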

Funded Province

Hubei Province

Project Completion Report (Full Text)

Large complex components, such as large aircraft skins, wind turbine rotor blades, and high-speed train body structures, are widely used in aerospace, energy, and transportation. This project combined robotic manipulators with mobile platforms, achieved breakthroughs in the core technologies of multi-robot cooperative measurement, planning, and compliant machining for large complex components, and developed a multi-robot cooperative intelligent machining system that realizes high-performance machining of such components. Main achievements: (1) Proposed new principles for multi-robot cooperative machining of large complex components: intelligent description of machining trajectories via trajectory motion primitives, constant-grinding-force surface planning based on trial-contact machining learning, and an impedance-based dual-robot model predictive synchronous control method for thin-walled workpieces. (2) Proposed new methods for autonomous mobile-robot machining of large complex components: a motion planning and control framework for multi-robot mobile machining of large wind turbine blades, with methods for high-quality region partitioning, high-performance station optimization, efficient collision detection, and smooth trajectory planning, enabling autonomous robotic machining. (3) Established a new framework for efficient, high-precision stitched measurement of large complex components: a global tracking system for the measurement pose of the mobile machining robot and a new point-cloud registration method based on consistent estimation of global feature transforms, achieving high-precision synchronized acquisition of point-cloud data and measurement poses together with efficient, accurate fusion of massive point clouds. (4) Established a new mechanism for guaranteeing robotic machining accuracy on large complex components: a new online monitoring method based on a vibration-machining-quality matching model, analysis of error propagation in mobile robotic machining, and online inspection plus offline evaluation of multi-process machining quality. (5) Developed an integrated measurement-planning-machining robotic system for large complex components, forming a new closed-loop autonomous machining mode of large-range stitched measurement, piecewise trajectory planning, and mobile robotic machining execution. Autonomous robotic machining systems were developed, including a robotic grinding and polishing system for key aerospace parts, a robotic grinding and polishing system for high-speed train body-in-white, and a robotic fiber placement system for aircraft composites, strongly supporting the development of key manufacturing technologies and core equipment for large complex components in China's aerospace, energy, and transportation sectors.
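A standard building block behind stitched point-cloud measurement is pairwise rigid alignment of two scans. The sketch below shows only the textbook Kabsch/SVD solution for known correspondences on synthetic data, not the project's global-feature consistency estimation method.

```python
# Pairwise rigid alignment of corresponding point sets (Kabsch/SVD).
import numpy as np

def rigid_transform(P, Q):
    """Return R, t minimizing ||(P @ R.T + t) - Q|| for corresponding Nx3 points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # proper rotation, det(R) = +1
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))                   # synthetic "scan 1"
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true                      # "scan 2" = rigidly moved scan 1
R_est, t_est = rigid_transform(P, Q)
err = np.abs(P @ R_est.T + t_est - Q).max()    # alignment residual
```

In practice, correspondences come from feature matching, and the pairwise transforms are then reconciled globally across all scans.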

  • 1. A Visual Servoing Method based on Point Cloud

    • Keywords:
    • Visual servoing;Feature matching;Illumination changes;Image data;Point cloud;Real robot;Texture features;Visual servoing methods
    • Zhang, Shiyi;Gong, Zeyu;Tao, Bo;Ding, Han
    • 《2020 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2020》
    • 2020
    • September 28, 2020 - September 29, 2020
    • Virtual, Asahikawa, Hokkaido, Japan
    • Conference

    A visual servoing method based on point clouds is proposed in this paper. The method needs no image data and relies solely on dense point-cloud information, so it is independent of illumination changes. It also dispenses with the feature matching widely employed in traditional visual servoing methods. Experiments on a real robot show that the proposed method is effective and robust both in ordinary scenes and in scenes where the object has few texture features. © 2020 IEEE.
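As a toy illustration of servoing on point-cloud geometry instead of image features (not the paper's control law), a proportional law on the cloud centroid already drives the view to a desired pose along the translational axes:

```python
# Proportional servoing on a point-cloud centroid (illustrative only).
import numpy as np

def servo_step(cloud, target_centroid, gain=0.5):
    """One control cycle: move by a fraction of the centroid error.
    (Moving the sensor shifts the observed cloud by the same amount.)"""
    error = target_centroid - cloud.mean(axis=0)
    return cloud + gain * error

rng = np.random.default_rng(1)
cloud = rng.normal(size=(100, 3)) + np.array([2.0, -1.0, 4.0])  # current view
target = np.zeros(3)                                            # desired centroid
for _ in range(30):
    cloud = servo_step(cloud, target)
residual = np.linalg.norm(cloud.mean(axis=0) - target)          # remaining error
```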

  • 2. JSNet: Joint instance and semantic segmentation of 3D point clouds

    • Keywords:
    • Semantics;Large dataset;Artificial intelligence;Semantic Segmentation;Back-bone network;Different layers;Discriminative features;Feature fusion;Mean-Shift Clustering;Semantic features;Semantic segmentation;State-of-the-art methods
    • Zhao, Lin;Tao, Wenbing
    • 《34th AAAI Conference on Artificial Intelligence, AAAI 2020》
    • 2020
    • February 7, 2020 - February 12, 2020
    • New York, NY, United states
    • Conference

    In this paper, we propose a novel joint instance and semantic segmentation approach, called JSNet, to address instance and semantic segmentation of 3D point clouds simultaneously. Firstly, we build an effective backbone network to extract robust features from the raw point clouds. Secondly, to obtain more discriminative features, a point cloud feature fusion module is proposed to fuse features from different layers of the backbone network. Furthermore, a joint instance semantic segmentation module is developed to transform semantic features into the instance embedding space, where the transformed features are fused with instance features to facilitate instance segmentation; the module also aggregates instance features into the semantic feature space to promote semantic segmentation. Finally, instance predictions are generated by applying simple mean-shift clustering to the instance embeddings. We evaluate the proposed JSNet on a large-scale 3D indoor point cloud dataset, S3DIS, and on the ShapeNet part dataset, and compare it with existing approaches. Experimental results demonstrate that our approach outperforms state-of-the-art methods in 3D instance segmentation, yields a significant improvement in 3D semantic prediction, and is also beneficial for part segmentation. The source code for this work is available at https://github.com/dlinzhao/JSNet. Copyright 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
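The final step can be reproduced in miniature: plain mean-shift over toy 2-D instance embeddings (illustrative data, not JSNet features) turns the embedding space into discrete instance labels.

```python
# Minimal mean-shift clustering on synthetic 2-D embeddings.
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=30):
    """Shift each point toward the mean of its neighbourhood until modes
    emerge, then merge nearby modes into discrete cluster labels."""
    modes = points.copy()
    for _ in range(iters):
        for i in range(len(modes)):
            near = points[np.linalg.norm(points - modes[i], axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    labels, centers = [], []
    for m in modes:                       # merge modes closer than bandwidth/2
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels.append(j)
                break
        else:
            centers.append(m.copy())
            labels.append(len(centers) - 1)
    return np.array(labels), centers

rng = np.random.default_rng(2)
emb = np.vstack([rng.normal(0.0, 0.1, size=(40, 2)),    # instance A embeddings
                 rng.normal(5.0, 0.1, size=(40, 2))])   # instance B embeddings
labels, centers = mean_shift(emb)
n_instances = len(centers)
```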

  • 3. SSRNet: Scalable 3D surface reconstruction network

    • Keywords:
    • Computer vision;Surface reconstruction;Processing;Learning systems;Image reconstruction;3D surface reconstruction;Generalization capability;Geometry processing;Implicit surfaces;Learning-based methods;Reconstruction method;Relative positions;Scalability issue
    • Mi, Zhenxing;Luo, Yiming;Tao, Wenbing
    • 《2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020》
    • 2020
    • June 14, 2020 - June 19, 2020
    • Virtual, Online, United states
    • Conference

    Existing learning-based surface reconstruction methods from point clouds still face challenges in scalability and in preserving detail on large-scale point clouds. In this paper, we propose SSRNet, a novel scalable learning-based method for surface reconstruction. SSRNet constructs local geometry-aware features for octree vertices and designs a scalable reconstruction pipeline, which not only greatly enhances the prediction accuracy of the relative position between the vertices and the implicit surface, improving reconstruction quality, but also allows the point cloud and octree vertices to be divided and processed in parallel, giving superior scalability on large-scale point clouds with millions of points. Moreover, SSRNet demonstrates outstanding generalization capability and needs only a small amount of surface data for training, much less than other learning-based reconstruction methods, which helps it avoid overfitting. A model trained on one dataset can be used directly on other datasets with superior performance. Finally, the time consumption of SSRNet on a large-scale point cloud is acceptable and competitive. To our knowledge, SSRNet is the first to offer a convincing solution to the scalability issue of learning-based surface reconstruction methods, and is an important step toward making learning-based methods competitive with geometry processing methods on real-world, challenging data. Experiments show that our method achieves a breakthrough in scalability and quality compared with state-of-the-art learning-based methods. © 2020 IEEE.
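The scalability idea, classifying vertices against the implicit surface while processing the vertex set in independent parts, can be mimicked with a toy stand-in that uses an analytic sphere in place of the learned network:

```python
# Chunked inside/outside classification of grid vertices (toy stand-in).
import numpy as np

def classify_chunk(vertices, radius=1.0):
    """Inside/outside labels of vertices w.r.t. an implicit sphere |x| = radius."""
    return np.linalg.norm(vertices, axis=1) < radius

axis = np.linspace(-2.0, 2.0, 16)
grid = np.stack(np.meshgrid(axis, axis, axis), axis=-1).reshape(-1, 3)
chunks = np.array_split(grid, 8)       # independent parts, could run in parallel
labels = np.concatenate([classify_chunk(c) for c in chunks])
inside_fraction = labels.mean()        # roughly sphere volume / cube volume
```

Because each chunk is classified independently, the same pattern scales to vertex sets far larger than memory would allow in one pass.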

  • 4. Multi-scale geometric consistency guided multi-view stereo

    • Keywords:
    • Computer vision;Stereo image processing;Geometry;Depth Estimation;Depth map estimation;Multi-hypothesis;Multi-view stereo;Multi-views;Region information;State-of-the-art performance;Vision applications
    • Xu, Qingshan;Tao, Wenbing
    • 《32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019》
    • 2019
    • June 16, 2019 - June 20, 2019
    • Long Beach, CA, United states
    • Conference

    In this paper, we propose an efficient multi-scale geometric consistency guided multi-view stereo method for accurate and complete depth map estimation. We first present our basic multi-view stereo method with Adaptive Checkerboard sampling and Multi-Hypothesis joint view selection (ACMH). It leverages structured region information to sample better candidate hypotheses for propagation and to infer the aggregation view subset at each pixel. For depth estimation in low-textured areas, we further combine ACMH with multi-scale geometric consistency guidance (ACMM) to obtain reliable depth estimates for low-textured areas at coarser scales and guarantee that they can be propagated to finer scales. To correct erroneous estimates propagated from the coarser scales, we present a novel detail restorer. Experiments on extensive datasets show our method achieves state-of-the-art performance, recovering depth estimates not only in low-textured areas but also in fine details. © 2019 IEEE.
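The propagation mechanism underlying ACMH/ACMM can be shown in toy form (a stand-in cost, not the paper's photometric matching cost): on a constant-depth scene, a single good hypothesis spreads to every pixel through neighbour propagation.

```python
# PatchMatch-style depth-hypothesis propagation (toy stand-in cost).
import numpy as np

def propagate(depth, cost_fn, iters=3):
    """Each pixel adopts a neighbour's depth hypothesis whenever it scores a
    lower matching cost (raster-order propagation)."""
    h, w = depth.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        cand = depth[ny, nx]
                        if cost_fn(y, x, cand) < cost_fn(y, x, depth[y, x]):
                            depth[y, x] = cand
    return depth

true_depth = 3.0
rng = np.random.default_rng(3)
depth = rng.uniform(1.0, 10.0, size=(12, 12))   # random initial hypotheses
depth[0, 0] = true_depth                        # one seeded good hypothesis
cost = lambda y, x, d: abs(d - true_depth)      # stand-in matching cost
depth = propagate(depth, cost)
max_err = np.abs(depth - true_depth).max()
```

The real algorithms score hypotheses with multi-view photometric and geometric consistency rather than against a known ground truth, but the spreading behaviour is the same.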
