Autonomous Tracking and Coordinated Control of Multi-Robot Mobile Machining for Large Complex Components
Funding source
Principal investigator
Funded institution
Approval year
Approval date
Project number
Research period
Project level
Grant amount
Discipline
Discipline code
Fund category
Keywords
Participants
Participating institutions
Funded province
Project completion report (full text)
1. A Visual Servoing Method based on Point Cloud
- Keywords:
- Visual servoing;Feature matching;Illumination changes;Image data;Point cloud;Real robot;Texture features;Visual servoing methods
- Zhang, Shiyi;Gong, Zeyu;Tao, Bo;Ding, Han
- 《2020 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2020》
- 2020
- September 28, 2020 - September 29, 2020
- Virtual, Asahikawa, Hokkaido, Japan
- Conference
A visual servoing method based on point cloud is proposed in this paper. This method does not need any image data and relies solely on dense point cloud information, and is therefore independent of illumination changes. Meanwhile, it is free of the feature matching widely employed in traditional visual servoing methods. Experiments on a real robot show that the proposed method is effective and robust both in regular scenes and in scenes where the object lacks texture features. © 2020 IEEE.
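The control loop behind such a method can be illustrated with a minimal sketch: estimate the rigid transform between the current and goal point clouds, then issue a proportional motion command. This is not the paper's algorithm; `kabsch` assumes the two clouds are index-aligned (a strong simplification standing in for the paper's matching-free dense alignment), and `servo_step` and its gain are hypothetical names for illustration.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst.

    Assumes the two clouds are index-aligned -- a simplification;
    the paper's method works on dense clouds without explicit
    feature matching.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    # Reflection guard keeps R a proper rotation (det(R) = +1).
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def servo_step(current_cloud, goal_cloud, gain=0.5):
    """One proportional servoing step toward the goal cloud."""
    R, t = kabsch(current_cloud, goal_cloud)
    # Translational command: move a fraction of the residual offset.
    v = gain * t
    # Rotational command magnitude: angle of the residual rotation.
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return v, angle
```

Driving the robot with `servo_step` until `v` and `angle` fall below a tolerance closes the visual servoing loop without any image features.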
2. JSNet: Joint instance and semantic segmentation of 3D point clouds
- Keywords:
- Semantics;Large dataset;Artificial intelligence;Semantic Segmentation;Back-bone network;Different layers;Discriminative features;Feature fusion;Mean-Shift Clustering;Semantic features;Semantic segmentation;State-of-the-art methods
- Zhao, Lin;Tao, Wenbing
- 《34th AAAI Conference on Artificial Intelligence, AAAI 2020》
- 2020
- February 7, 2020 - February 12, 2020
- New York, NY, United states
- Conference
In this paper, we propose a novel joint instance and semantic segmentation approach, called JSNet, to address the instance and semantic segmentation of 3D point clouds simultaneously. Firstly, we build an effective backbone network to extract robust features from the raw point clouds. Secondly, to obtain more discriminative features, a point cloud feature fusion module is proposed to fuse the different layer features of the backbone network. Furthermore, a joint instance semantic segmentation module is developed to transform semantic features into instance embedding space, and then the transformed features are further fused with instance features to facilitate instance segmentation. Meanwhile, this module also aggregates instance features into semantic feature space to promote semantic segmentation. Finally, the instance predictions are generated by applying a simple mean-shift clustering on instance embeddings. We evaluate the proposed JSNet on a large-scale 3D indoor point cloud dataset, S3DIS, and a part dataset, ShapeNet, and compare it with existing approaches. Experimental results demonstrate our approach outperforms the state-of-the-art methods in 3D instance segmentation with a significant improvement in 3D semantic prediction, and our method is also beneficial for part segmentation. The source code for this work is available at https://github.com/dlinzhao/JSNet. Copyright 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
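The final step above — mean-shift clustering on instance embeddings — can be sketched in a toy form. This is not JSNet's implementation: `mean_shift` below uses a flat kernel and a naive O(n²) loop purely to show how points whose embeddings drift to the same density mode end up with the same instance label; the `bandwidth` and `merge_tol` parameters are illustrative choices.

```python
import numpy as np

def mean_shift(embeddings, bandwidth=0.5, n_iter=20, merge_tol=1e-2):
    """Toy mean-shift over per-point instance embeddings.

    Each point's mode estimate drifts toward the local density peak
    under a flat kernel; points converging to the same mode share an
    instance label.
    """
    modes = embeddings.astype(float).copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            # Flat kernel: average all embeddings within the bandwidth.
            mask = np.linalg.norm(embeddings - modes[i], axis=1) < bandwidth
            modes[i] = embeddings[mask].mean(axis=0)
    # Merge nearby modes into discrete instance labels.
    labels, centers = np.full(len(modes), -1, dtype=int), []
    for i, m in enumerate(modes):
        for k, c in enumerate(centers):
            if np.linalg.norm(m - c) < merge_tol:
                labels[i] = k
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels
```

In a real pipeline the embeddings come from the network's instance branch, so points of the same object instance are already close in embedding space and cluster cleanly.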
3. SSRNet: Scalable 3D surface reconstruction network
- Keywords:
- Computer vision;Surface reconstruction;Processing;Learning systems;Image reconstruction;3D surface reconstruction;Generalization capability;Geometry processing;Implicit surfaces;Learning-based methods;Reconstruction method;Relative positions;Scalability issue
- Mi, Zhenxing;Luo, Yiming;Tao, Wenbing
- 《2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020》
- 2020
- June 14, 2020 - June 19, 2020
- Virtual, Online, United states
- Conference
Existing learning-based surface reconstruction methods from point clouds still face challenges in scalability and in preserving details on large-scale point clouds. In this paper, we propose SSRNet, a novel scalable learning-based method for surface reconstruction. The proposed SSRNet constructs local geometry-aware features for octree vertices and designs a scalable reconstruction pipeline, which not only greatly enhances the prediction accuracy of the relative position between the vertices and the implicit surface, improving surface reconstruction quality, but also allows dividing the point cloud and octree vertices and processing different parts in parallel for superior scalability on large-scale point clouds with millions of points. Moreover, SSRNet demonstrates outstanding generalization capability and needs only a small amount of surface data for training, much less than other learning-based reconstruction methods, which effectively avoids overfitting. The trained model of SSRNet on one dataset can be directly used on other datasets with superior performance. Finally, the time consumption of SSRNet on a large-scale point cloud is acceptable and competitive. To our knowledge, the proposed SSRNet is the first to bring a convincing solution to the scalability issue of learning-based surface reconstruction methods, and is an important step toward making learning-based methods competitive with geometry processing methods on real-world and challenging data. Experiments show that our method achieves a breakthrough in scalability and quality compared with state-of-the-art learning-based methods. © 2020 IEEE.
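The core scalable idea — classify octree vertices by their side of the implicit surface, in independent chunks — can be illustrated with a purely geometric stand-in. SSRNet learns this classifier; `vertex_sides` below replaces it with a nearest-neighbor normal test, and the function name, `chunk` parameter, and brute-force distance search are all illustrative simplifications.

```python
import numpy as np

def vertex_sides(vertices, cloud_pts, cloud_normals, chunk=1024):
    """Classify query vertices as inside (-1) or outside (+1) a surface.

    Geometric stand-in for a learned octree-vertex classifier: each
    vertex is signed by the outward normal of its nearest cloud point.
    Vertices are processed in independent chunks, mirroring the
    divide-and-process idea behind the scalability claim.
    """
    signs = np.empty(len(vertices))
    for s in range(0, len(vertices), chunk):
        block = vertices[s:s + chunk]
        # Nearest cloud point for every vertex in this chunk.
        d = np.linalg.norm(block[:, None, :] - cloud_pts[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        # Positive if the vertex lies on the outward-normal side.
        dots = np.einsum('ij,ij->i', block - cloud_pts[nn], cloud_normals[nn])
        signs[s:s + chunk] = np.sign(dots)
    return signs
```

Once every octree vertex carries a sign, the surface is extracted along the edges where the sign changes (e.g. with marching cubes), and because chunks are independent they can be dispatched to parallel workers.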
4. Multi-scale geometric consistency guided multi-view stereo
- Keywords:
- Computer vision;Stereo image processing;Geometry;Depth Estimation;Depth map estimation;Multi-hypothesis;Multi-view stereo;Multi-views;Region information;State-of-the-art performance;Vision applications
- Xu, Qingshan;Tao, Wenbing
- 《32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019》
- 2019
- June 16, 2019 - June 20, 2019
- Long Beach, CA, United states
- Conference
In this paper, we propose an efficient multi-scale geometric consistency guided multi-view stereo method for accurate and complete depth map estimation. We first present our basic multi-view stereo method with Adaptive Checkerboard sampling and Multi-Hypothesis joint view selection (ACMH). It leverages structured region information to sample better candidate hypotheses for propagation and infer the aggregation view subset at each pixel. For the depth estimation of low-textured areas, we further propose to combine ACMH with multi-scale geometric consistency guidance (ACMM) to obtain the reliable depth estimates for low-textured areas at coarser scales and guarantee that they can be propagated to finer scales. To correct the erroneous estimates propagated from the coarser scales, we present a novel detail restorer. Experiments on extensive datasets show our method achieves state-of-the-art performance, recovering the depth estimation not only in low-textured areas but also in details. © 2019 IEEE.
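The checkerboard propagation at the heart of ACMH can be sketched in a bare-bones form: pixels are split into red/black cells, and each pixel tests its neighbors' depth hypotheses against a matching cost. This is not the paper's adaptive sampling or joint view selection; `checkerboard_propagate` uses a fixed 4-neighborhood, and `cost_fn` is a hypothetical stand-in for the multi-view photometric cost.

```python
import numpy as np

def checkerboard_propagate(depth, cost_fn):
    """One red/black propagation sweep over a depth map.

    Each pixel tests depth hypotheses from its 4-neighbours and keeps
    the one with the lowest matching cost, so good estimates spread
    across the image in alternating half-sweeps.
    """
    h, w = depth.shape
    out = depth.copy()
    for parity in (0, 1):  # red cells first, then black
        for y in range(h):
            for x in range(w):
                if (x + y) % 2 != parity:
                    continue
                best_d, best_c = out[y, x], cost_fn(y, x, out[y, x])
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        cand = out[ny, nx]
                        c = cost_fn(y, x, cand)
                        if c < best_c:
                            best_d, best_c = cand, c
                out[y, x] = best_d
    return out
```

Iterating such sweeps from coarse to fine scales, with the coarse result constraining the finer one, is the intuition behind the multi-scale geometric consistency guidance in ACMM.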
