Enhancing Sense of Embodiment through Automated Rubber Hand Illusion and Its Assessment

Funding source

Japan Society for the Promotion of Science (JSPS)

Principal investigator

Perusquia-Hernandez, Monica

Host institution

Nara Institute of Science and Technology (奈良先端科学技術大学院大学)

Grant number

25K21250

Award year

2025

Start date

Not disclosed

Research period

Unknown / Unknown

Project level

National

Funding amount

¥4,810,000 (JPY)

Discipline

Human interface and interaction (ヒューマンインタフェースおよびインタラクション関連)

Discipline code

Not disclosed

Grant category

Grant-in-Aid for Early-Career Scientists (若手研究)

Keywords

affective computing ; embodiment ; electrical stimulation ; biosignal processing ; virtual reality

Participants

Not disclosed

Participating institution

Nara Institute of Science and Technology, Graduate School of Science and Technology

Project abstract (Outline of Research at the Start): The Rubber Hand Illusion (RHI) upholds the hope that seamless, full-body embodiment in virtual avatars will be possible in the future. Nevertheless, its implementation still requires manual work from an experimenter synchronously stroking a real and a fake hand, and there is no reliable, objective assessment to quantify its successful induction. Therefore, I propose to implement an automatic biomarker assessment of the successful induction of the RHI, and to use electrical stimulation to automate it.

  • 1. Electro-Oculography and Proprioceptive Calibration Enable Horizontal and Vertical Gaze Estimation, Even with Eyes Closed

    • Keywords:
    • electro-oculography; signal processing; eyes closed; gaze direction estimation; movements; performance; tracking; sleep
    • Wei, Xin; Dollack, Felix; Kiyokawa, Kiyoshi; Perusquia-Hernandez, Monica
    • Sensors
    • 2025
    • Vol. 25
    • Issue 21
    • Journal article

    Eye movement is an important tool used to investigate cognition. It also serves as input in human-computer interfaces for assistive technology. It can be measured with camera-based eye tracking and electro-oculography (EOG). EOG does not rely on eye visibility and can be measured even when the eyes are closed. We investigated the feasibility of detecting the gaze direction using EOG while having the eyes closed. A total of 15 participants performed a proprioceptive calibration task with open and closed eyes, while their eye movement was recorded with a camera-based eye tracker and with EOG. The calibration was guided by the participants' hand motions following a pattern of felt dots on cardboard. Our cross-correlation analysis revealed reliable temporal synchronization between gaze-related signals and the instructed trajectory across all conditions. Statistical comparison tests and equivalence tests demonstrated that EOG tracking was statistically equivalent to the camera-based eye tracker gaze direction during the eyes-open condition. The camera-based eye-tracking glasses do not support tracking with closed eyes. Therefore, we evaluated the EOG-based gaze estimates during the eyes-closed trials by comparing them to the instructed trajectory. The results showed that EOG signals, guided by proprioceptive cues, followed the instructed path and achieved a significantly greater accuracy than shuffled control data, which represented a chance-level performance. This demonstrates the advantage of EOG when camera-based eye tracking is infeasible, and it paves the way for the development of eye-movement input interfaces for blind people, research on eye movement direction when the eyes are closed, and the early detection of diseases.

    ...
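The cross-correlation analysis described in the abstract can be sketched as follows. This is a minimal illustration with synthetic signals, not the paper's pipeline: the sampling rate, sinusoidal trajectory, and 250 ms delay are all assumptions made for the demo.

```python
import numpy as np

def estimate_lag(signal, reference, fs):
    """Estimate the time lag (seconds) that best aligns `signal` with
    `reference` via cross-correlation. Positive lag: `signal` trails."""
    s = (signal - signal.mean()) / signal.std()
    r = (reference - reference.mean()) / reference.std()
    xcorr = np.correlate(s, r, mode="full")
    lags = np.arange(-len(r) + 1, len(s))     # lag axis for mode="full"
    best = lags[np.argmax(xcorr)]
    peak = xcorr.max() / len(r)               # roughly normalized peak
    return best / fs, peak

# Synthetic demo: a 1 Hz "instructed trajectory" and a delayed, noisy copy
np.random.seed(0)
fs = 100.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
reference = np.sin(2 * np.pi * 1.0 * t)       # instructed trajectory
delay = 0.25                                  # 250 ms true delay
signal = np.sin(2 * np.pi * 1.0 * (t - delay)) + 0.1 * np.random.randn(t.size)

lag, peak = estimate_lag(signal, reference, fs)
print(f"estimated lag: {lag:.2f} s, peak correlation: {peak:.2f}")
```

A strong, well-localized correlation peak at a small lag is the kind of evidence of temporal synchronization between the gaze-related signal and the instructed trajectory that the abstract refers to.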
  • 2. Large Language Models as Perceivers of Dynamic Full-Body Expressions of Emotion

    • Keywords:
    • behavioral research; human-computer interaction; affective computing; body motions; full body; human emotion; human judgments; human motions; human perception; language model; large language model; motion description
    • Liu, Huakun; Cheng, Miao; Wei, Xin; Dollack, Felix; Schneider, Victor; Uchiyama, Hideaki; Kitamura, Yoshifumi; Kiyokawa, Kiyoshi; Perusquia-Hernandez, Monica
    • 27th International Conference on Multimodal Interaction (ICMI 2025)
    • 2025
    • October 13–17, 2025
    • Canberra, ACT, Australia
    • Conference paper

    Human emotion expressed through body motions is often interpreted differently by different perceivers. We explored the use of large language models (LLMs) to simulate humans’ judgment variability and model the distribution of perceived emotions from body motion. Given textual motion descriptions derived from a multiactor dataset covering diverse emotional contexts, we prompted LLMs to generate emotion probability distributions. We compared LLM-generated outputs with human perception distributions and performer-intended emotion labels. The model showed strong perceptual alignment with human perceivers and achieved an accuracy of 63.2% in identifying the intended emotions, comparable to that of human perceivers. We further investigated the effects of context and scoring constraints, showing that both factors influence the scoring of the model. Our findings suggest that LLMs can serve as effective proxies for modeling emotion perception distributions from motions, supporting scalable evaluation and generation of expressive behaviors grounded in human perception.

    ...
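Comparing an LLM-generated emotion probability distribution against a human perception distribution, as the abstract describes, is commonly done with a divergence measure. Below is a minimal sketch using the Jensen-Shannon divergence plus top-1 agreement; the five-emotion label set and all probabilities are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical emotion label set (not from the paper)
EMOTIONS = ["anger", "fear", "happiness", "sadness", "surprise"]

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2, in bits) between two
    probability distributions; 0 = identical, 1 = maximally different."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))  # KL divergence in bits
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Made-up distributions: averaged human ratings vs. one LLM output
human = np.array([0.10, 0.05, 0.60, 0.15, 0.10])
llm   = np.array([0.05, 0.05, 0.70, 0.10, 0.10])

jsd = js_divergence(human, llm)
top1_match = EMOTIONS[int(human.argmax())] == EMOTIONS[int(llm.argmax())]
print(f"JSD = {jsd:.3f} bits; top-1 agreement: {top1_match}")
```

A low divergence together with frequent top-1 agreement is one way to quantify the "strong perceptual alignment" between model and human perceivers that the abstract reports.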