Near Lossless Dense Light Field Compression Using Generalized Neural Radiance Field
Project source
Principal investigator
Funded institution
Approval year
Approval date
Project number
Project level
Research period
Funding amount
Discipline
Discipline code
Funding category
Keywords
Participants
Participating institutions
1. Distortion-Resilient DIBR for Novel View Synthesis from a Single Image
- Keywords:
- Depth image based rendering; Distortion handling; Geometry information; Image inputs; Image-based rendering methods; Novel view synthesis; Real-estates; Single images; Synthesised; Texture information
- Liu, Yuchen;Kamioka, Eiji;Tan, Phan Xuan
- 《13th International Symposium on Information and Communication Technology, SOICT 2024》
- 2025
- December 13, 2024 - December 15, 2024
- Danang, Viet Nam
- Conference
It is challenging to render novel views from a single image input due to inherent ambiguities in the geometry and texture information of the desired scene. As a consequence, existing methods often encounter various types of distortions in synthesized views. To address this, we propose a distortion-resilient Depth-Image-Based Rendering (DIBR) method for synthesizing novel views given a single image input. The proposed method is qualitatively and quantitatively evaluated on the RealEstate10K dataset, showing superior results compared to baselines. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
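For orientation, the classical DIBR pipeline this abstract builds on can be summarized as: back-project each source pixel with its depth, rigidly transform the 3D points into the target camera, and re-project with a z-buffer. Below is a minimal numpy sketch of that baseline; the function name, the shared intrinsics `K`, and the relative pose `(R, t)` are illustrative assumptions, and the paper's distortion-handling contributions are not reproduced here.

```python
import numpy as np

def dibr_forward_warp(rgb, depth, K, R, t):
    """Warp a source RGB image to a novel view via classical DIBR.

    rgb:   (H, W, 3) source image
    depth: (H, W) per-pixel z-depth in the source camera frame
    K:     (3, 3) camera intrinsics (assumed shared by both views)
    R, t:  rotation (3, 3) and translation (3,) from source to target camera
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates, flattened in row-major order.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)

    # Back-project to 3D points in the source camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

    # Rigid transform into the target camera frame, then project.
    proj = K @ (R @ pts + t.reshape(3, 1))
    z = proj[2]
    u_t = np.round(proj[0] / np.maximum(z, 1e-6)).astype(int)
    v_t = np.round(proj[1] / np.maximum(z, 1e-6)).astype(int)

    # Z-buffered splat: nearer points overwrite farther ones.
    out = np.zeros_like(rgb)
    zbuf = np.full((H, W), np.inf)
    valid = (z > 0) & (u_t >= 0) & (u_t < W) & (v_t >= 0) & (v_t < H)
    src = rgb.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[v_t[i], u_t[i]]:
            zbuf[v_t[i], u_t[i]] = z[i]
            out[v_t[i], u_t[i]] = src[i]
    return out  # disocclusion holes remain black and need inpainting
```

The holes and cracks left by this forward warp (disocclusions, stretched geometry at depth edges) are exactly the distortions such single-image methods must handle.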
2. Attention-Based Grasp Detection With Monocular Depth Estimation
- Keywords:
- Computer vision; Deep learning; Intelligent robots; Job analysis; Three dimensional displays; Machine-vision; Point cloud compression; Point-clouds; Pose-estimation; Robot vision systems; Solid modelling; Task analysis; Three-dimensional display
- Xuan Tan, Phan;Hoang, Dinh-Cuong;Nguyen, Anh-Nhat;Nguyen, Van-Thiep;Vu, Van-Duc;Nguyen, Thu-Uyen;Hoang, Ngoc-Anh;Phan, Khanh-Toan;Tran, Duc-Thanh;Vu, Duy-Quang;Ngo, Phuc-Quan;Duong, Quang-Tri;Ho, Ngoc-Trung;Tran, Cong-Trinh;Duong, Van-Hiep;Mai, Anh-Truong
- 《IEEE Access》
- 2024
- Volume 12
- Issue
- Journal
Grasp detection plays a pivotal role in robotic manipulation, allowing robots to interact with and manipulate objects in their surroundings. Traditionally, this has relied on three-dimensional (3D) point cloud data acquired from specialized depth cameras. However, the limited availability of such sensors in real-world scenarios poses a significant challenge. In many practical applications, robots operate in diverse environments where obtaining high-quality 3D point cloud data may be impractical or impossible. This paper introduces an innovative approach to grasp generation using color images, thereby eliminating the need for dedicated depth sensors. Our method capitalizes on advanced deep learning techniques for depth estimation directly from color images. Instead of relying on conventional depth sensors, our approach computes predicted point clouds based on estimated depth images derived directly from Red-Green-Blue (RGB) input data. To our knowledge, this is the first study to explore the use of predicted depth data for grasp detection, moving away from the traditional dependence on depth sensors. The novelty of this work is the development of a fusion module that seamlessly integrates features extracted from RGB images with those inferred from the predicted point clouds. Additionally, we adapt a voting mechanism from our previous work (VoteGrasp) to enhance robustness to occlusion and generate collision-free grasps. Experimental evaluations conducted on standard datasets validate the effectiveness of our approach, demonstrating its superior performance in generating grasp configurations compared to existing methods. With our proposed method, we achieved a significant 4% improvement in average precision compared to state-of-the-art grasp detection methods. Furthermore, our method demonstrates promising practical viability through real robot grasping experiments, achieving an impressive 84% success rate. © 2013 IEEE.
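The step of turning an estimated depth image into a "predicted point cloud" is standard pinhole back-projection; a minimal numpy sketch follows, assuming known intrinsics `K`. The fusion module and the VoteGrasp-style voting described above are outside the scope of this snippet, and `depth_to_point_cloud` is an illustrative name, not the authors' API.

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Back-project an estimated depth map to a predicted point cloud
    via the pinhole model, as a stand-in for a depth-sensor cloud.

    depth: (H, W) monocular depth estimate (e.g., from an RGB-only network)
    K:     (3, 3) camera intrinsics
    Returns (H*W, 3) points in the camera frame.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - cx) * z / fx   # horizontal pixel offset scaled by depth
    y = (v - cy) * z / fy   # vertical pixel offset scaled by depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Features extracted from such a predicted cloud can then be fused with RGB features, which is the role the abstract assigns to the proposed fusion module.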
...
