Clarification of distance compression and personal space perception in virtual environment using physiological signals

Funding Source

Japan Society for the Promotion of Science (JSPS)

Principal Investigator

SRIPIAN PEERAYA

Host Institution

Shibaura Institute of Technology

Fiscal Year of Award

2023

Award Date

Not disclosed

Project Number

23K11152

Research Period

Unknown / Unknown

Project Level

National

Funding Amount

4,810,000 JPY

Research Field

Perceptual information processing

Field Code

Not disclosed

Grant Category

Grant-in-Aid for Scientific Research (C) (基盤研究(C))

Keywords

Virtual reality; Personal space; Avatars; AI modulated; Interpersonal distance; Distance perception; Perception; Physiological signals

Participants

繁桝博昭 (Hiroaki Shigemasu); 菅谷みどり (Midori Sugaya)

Participating Institutions

Shibaura Institute of Technology, Faculty of Engineering; Kochi University of Technology, School of Informatics

Project Abstract: The research is progressing rather smoothly. We have completed the two major experiments as planned and analyzed the results. Currently, we are preparing two manuscripts: one will be submitted to the journal track of ADADA 2025, and the other to the Journal of the Digital Art and Design Association (ADADA Journal). These outputs reflect steady advancement toward our research objectives.

In 2024, we conducted two experiments in virtual reality (VR) to explore how various social and perceptual factors influence personal space (PS). The first study measured subjective discomfort and heart rate variability (HRV) when participants were approached or touched by avatars differing in gender and behavior. Results showed that male, human-like avatars caused the highest discomfort, especially during face-touching actions, with stronger reactions among female participants. The second study investigated whether users adjust their PS based on whether the avatar is perceived as an AI or a human, and whether conversation occurs. While avatar identity had no significant effect, the presence of conversation consistently reduced PS. These findings offer new insights into how social interaction and perception shape spatial behavior in immersive VR environments, contributing to better design for communication in virtual spaces.

In the next phase, we plan to conduct additional experiments that combine subjective measures with physiological signal recording (e.g., HRV) to deepen our understanding of the emotional and cognitive responses related to personal space in VR. We also aim to analyze cultural differences further. Based on these expanded datasets, we plan to prepare and submit additional journal papers to strengthen the academic contribution of this research.

Outline of Research at the Start: This study explores personal space perception in virtual reality (VR) and its impact on feelings of discomfort through physiological signals, informing the design of metaverse platforms and human digital twin computing.
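The HRV measure used in these experiments is typically derived from successive RR (inter-beat) intervals. As a minimal illustration, the sketch below computes two standard time-domain HRV metrics, SDNN and RMSSD; the RR-interval values are hypothetical and not taken from this project's data:

```python
import math
import statistics

def sdnn(rr_ms):
    """SDNN: sample standard deviation of RR intervals (overall HRV)."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences
    (short-term, parasympathetically driven HRV)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals in milliseconds (roughly 74 bpm).
rr = [812, 790, 835, 801, 779, 826, 810, 795]
print(round(sdnn(rr), 2), round(rmssd(rr), 2))
```

Lower RMSSD under an avatar-approach condition, relative to a resting baseline, is commonly read as reduced parasympathetic activity, which is how HRV supports the discomfort measures mentioned above.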

  • 1. Magnification effects on perspective angle and optical slant angle across locations

    • Keywords:
    • perception; spatial vision; depth; binocular vision; DISTANCE PERCEPTION; EGOCENTRIC DISTANCE; VERTICAL DISPARITY; VIEWING CONDITIONS; EXPANSION; SIZE
    • Sripian, Peeraya; Ijiri, Takashi; Yamaguchi, Yasushi
    • I-PERCEPTION
    • 2025
    • Vol. 16
    • Issue 4
    • Journal article

    This study investigates the magnification illusion, in which the perceived perspective and optical slant angles change when a scene is magnified. Our findings indicate that magnification influences these angles differently depending on location, suggesting that the illusion may be caused by changes in three-dimensional (3D) interpretation. The change in perspective-angle interpretation occurred primarily when the stimuli were on the ground or sidewall, but not on the ceiling: stimuli on the ceiling showed a significant underestimation of optical slant angles while the perspective angle remained relatively stable. We developed a mathematical model based on the hypothesis of changes in 3D interpretation, which aligns well with our data; under the proposed relationship, the interpretations of the perspective angle and the optical slant angle change together when a scene is magnified. This research characterizes spatial perception and its alteration under magnification and relative location, with potential applications in virtual reality and augmented reality system design.

  • 2. Multimodal Deep Learning for Remote Stress Estimation Using CCT-LSTM

    • Keywords:
    • Classification (of information); Human-robot interaction; Pattern recognition; Pipelines; Remote sensing; Statistical tests; Biomedical / healthcare / medicine; Cross-validation; F1 scores; Multi-modal; Multilevel; Remote stress; Remote sensing; Stress classification; Stress estimation; Task classification
    • Ziaratnia, Sayyedjavad; Laohakangvalvit, Tipporn; Sugaya, Midori; Sripian, Peeraya
    • 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
    • 2024
    • January 4–8, 2024
    • Waikoloa, HI, United States
    • Conference paper

    Stress estimation is key to the early detection and mitigation of health problems, to enhancing driving safety through driver stress monitoring, and to improving human-robot interaction efficiency by adapting to users' stress levels. In this paper, we present a novel method for video-based remote stress estimation and categorization, which involves two separate experiments: one for stress task classification and another for multilevel stress classification. The method combines two deep learning approaches, the Compact Convolutional Transformer (CCT) and Long Short-Term Memory (LSTM), to form a CCT-LSTM pipeline. For each modality (facial expression and rPPG), a CCT model is used to extract features, followed by an LSTM block for temporal pattern recognition. In stress task classification, the T1, T2, and T3 tasks from the UBFC-Phys dataset are used with sevenfold cross-validation; the results indicated a mean accuracy of 83.2% and an F1 score of 83.4%. For multilevel stress classification, the control (lower stress) and test (higher stress) groups from the same dataset were used with fivefold cross-validation, achieving a mean accuracy of 80.5% and an F1 score of 80.3%. The results suggest that our proposed model surpasses existing stress estimation methods by effectively using multimodal deep learning and the CCT-LSTM pipeline for precise, non-invasive stress detection and categorization, with applications in health monitoring, safety, and interactive technologies. © 2024 IEEE.
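    The structure of the pipeline described in this abstract (per-modality feature extraction, an LSTM per modality, then late fusion of the final hidden states into a classifier head) can be sketched as follows. This is not the authors' implementation: the CCT is stubbed as a fixed linear projection, the LSTM is a minimal hand-written cell with random untrained weights, and all shapes and data are synthetic stand-ins chosen for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class TinyLSTM:
        """Minimal single-layer LSTM (standard gate equations, untrained weights)."""
        def __init__(self, d_in, d_hidden):
            # One stacked weight matrix for the input, forget, cell, output gates.
            self.W = rng.normal(0, 0.1, (4 * d_hidden, d_in + d_hidden))
            self.b = np.zeros(4 * d_hidden)
            self.d_hidden = d_hidden

        def last_hidden(self, xs):
            """Run the cell over a (T, d_in) sequence; return the final hidden state."""
            h = np.zeros(self.d_hidden)
            c = np.zeros(self.d_hidden)
            for x in xs:
                z = self.W @ np.concatenate([x, h]) + self.b
                i, f, g, o = np.split(z, 4)
                i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
                c = f * c + i * np.tanh(g)
                h = o * np.tanh(c)
            return h

    def extract_features(frames, proj):
        """Stand-in for the per-modality CCT: project each frame to a feature vector."""
        return frames @ proj

    # Synthetic sequences for the two modalities (facial expression and rPPG).
    T, d_face, d_rppg, d_feat, d_hid = 16, 32, 8, 12, 10
    face_frames = rng.normal(size=(T, d_face))
    rppg_frames = rng.normal(size=(T, d_rppg))

    face_proj = rng.normal(0, 0.1, (d_face, d_feat))
    rppg_proj = rng.normal(0, 0.1, (d_rppg, d_feat))
    face_lstm, rppg_lstm = TinyLSTM(d_feat, d_hid), TinyLSTM(d_feat, d_hid)

    # One LSTM per modality, then late fusion of the final hidden states.
    fused = np.concatenate([
        face_lstm.last_hidden(extract_features(face_frames, face_proj)),
        rppg_lstm.last_hidden(extract_features(rppg_frames, rppg_proj)),
    ])
    W_head = rng.normal(0, 0.1, (3, fused.size))  # 3 classes, e.g. tasks T1/T2/T3
    logits = W_head @ fused
    probs = np.exp(logits) / np.exp(logits).sum()
    print(probs.shape)
    ```

    The point of the sketch is the data flow, not the numbers: each modality is reduced to a fixed-length summary by its own extractor-plus-LSTM branch, and only those summaries are concatenated before classification, which is what makes the fusion "late".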
