Enhancing Sense of Embodiment through Automated Rubber Hand Illusion and Its Assessment

Funding Source

Japan Society for the Promotion of Science (JSPS)

Principal Investigator

Perusquia-Hernandez Monica

Host Institution

奈良先端科学技術大学院大学

Project Number

25K21250

Fiscal Year

2025

Start Date

Not disclosed

Research Period

Unknown / Unknown

Project Level

National

Grant Amount

4,810,000 JPY

Research Field

Human interface and interaction

Field Code

Not disclosed

Grant Category

Grant-in-Aid for Early-Career Scientists (若手研究)

Keywords

affective computing; embodiment; electrical stimulation; biosignal processing; virtual reality

Participants

Not disclosed

Participating Institution

奈良先端科学技術大学院大学,先端科学技術研究科

Project Abstract: Outline of Research at the Start: The Rubber Hand Illusion (RHI) upholds the hope that seamless, full-body embodiment in virtual avatars will be possible in the future. Nevertheless, its implementation still requires manual work from an experimenter synchronously stroking a real and a fake hand, and there is no reliable, objective assessment to quantify its successful induction. Therefore, I propose to implement an automatic biomarker assessment of the successful induction of the RHI, and to use electrical stimulation to automate it.

  • 1. Large Language Models as Perceivers of Dynamic Full-Body Expressions of Emotion

    • Keywords:
    • Behavioral research; Human computer interaction; Affective Computing; Body motions; Full body; Human emotion; Human judgments; Human motions; Human perception; Language model; Large language model; Motion description
    • Liu, Huakun; Cheng, Miao; Wei, Xin; Dollack, Felix; Schneider, Victor; Uchiyama, Hideaki; Kitamura, Yoshifumi; Kiyokawa, Kiyoshi; Perusquia-Hernandez, Monica
    • 27th International Conference on Multimodal Interaction, ICMI 2025
    • 2025
    • October 13, 2025 - October 17, 2025
    • Canberra, ACT, Australia
    • Conference

    Human emotion expressed through body motions is often interpreted differently by different perceivers. We explored the use of large language models (LLMs) to simulate humans’ judgment variability and model the distribution of perceived emotions from body motion. Given textual motion descriptions derived from a multiactor dataset covering diverse emotional contexts, we prompted LLMs to generate emotion probability distributions. We compared LLM-generated outputs with human perception distributions and performer-intended emotion labels. The model showed strong perceptual alignment with human perceivers and achieved an accuracy of 63.2% in identifying the intended emotions, comparable to that of human perceivers. We further investigated the effects of context and scoring constraints, showing that both factors influence the scoring of the model. Our findings suggest that LLMs can serve as effective proxies for modeling emotion perception distributions from motions, supporting scalable evaluation and generation of expressive behaviors grounded in human perception. © 2025 Copyright held by the owner/author(s)

    ...
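    The comparison described in the abstract, aligning a model-generated emotion probability distribution with a pooled human-perception distribution and checking agreement on the most likely emotion, could be sketched roughly as follows. The emotion categories, probability values, and the use of Jensen-Shannon divergence as the alignment measure are illustrative assumptions here, not details taken from the paper.

    ```python
    import math

    def js_divergence(p, q):
        """Jensen-Shannon divergence (base 2) between two discrete distributions."""
        m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
        def kl(a, b):
            # Kullback-Leibler divergence, skipping zero-probability terms
            return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Hypothetical distributions over five emotion categories
    emotions = ["anger", "fear", "happiness", "sadness", "surprise"]
    llm_dist = [0.10, 0.05, 0.60, 0.15, 0.10]    # model-generated probabilities
    human_dist = [0.12, 0.08, 0.55, 0.15, 0.10]  # pooled human judgments

    # 0 means identical distributions; 1 is the maximum for base-2 JSD
    alignment = js_divergence(llm_dist, human_dist)
    # Top-1 agreement: does the model's most likely emotion match the humans'?
    top1_match = (emotions[llm_dist.index(max(llm_dist))]
                  == emotions[human_dist.index(max(human_dist))])
    print(f"JSD: {alignment:.4f}, top-1 agreement: {top1_match}")
    ```

    Aggregating top-1 agreement over a dataset would yield an accuracy figure comparable to the 63.2% reported in the abstract.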