AIGMob: Conditional Generative AI for Fine-grained Urban Mobility Simulation

Funding Source

Japan Society for the Promotion of Science (JSPS)

Principal Investigator

Renhe Jiang (姜仁河)

Funded Institution

The University of Tokyo

Grant Number

24K02996

Award Year

2024

Award Date

Not disclosed

Research Period

Unknown / Unknown

Project Level

National

Funding Amount

18,590,000 JPY

Disciplines

Joint-review subjects: Statistical science-related, Intelligent informatics-related; Intelligent informatics-related; Statistical science-related

Discipline Code

Not disclosed

Grant Category

Grant-in-Aid for Scientific Research (B)

Keywords

Large Language Model; Mobility Generation; Generative AI; Human Mobility; Mobility Simulation; Spatiotemporal Data

Participants

Not disclosed

Participating Institution

The University of Tokyo, Center for Spatial Information Science

Project Abstract: During the first year, we achieved several key research goals ahead of schedule, most notably the successful development and publication of our NeurIPS 2024 paper titled "Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation." This work demonstrates the feasibility of leveraging LLM-based agents to simulate individual-level urban mobility, an essential step toward achieving the overall objectives of AIGMob. The research not only validates our proposed direction but also opens up new avenues for fine-tuned, scenario-driven mobility simulation. Methodological formulation, data integration, and large-scale experiments have proceeded efficiently, and we are currently in a strong position to deepen model conditioning capabilities and explore multi-agent simulation in subsequent phases. Given this solid foundation, I am highly satisfied with the progress to date.

This year, we introduced a novel approach that integrates Large Language Models (LLMs) into an agent framework for flexible and effective personal mobility generation. LLMs overcome the limitations of previous models by effectively processing semantic data and offering versatility in modeling various tasks. Our approach addresses three research questions: aligning LLMs with real-world urban mobility data, developing reliable activity generation strategies, and exploring LLM applications in urban mobility. The key technical contribution is a novel LLM agent framework that accounts for individual activity patterns and motivations, including a self-consistency approach to align LLMs with real-world activity data and a retrieval-augmented strategy for interpretable activity generation. We evaluate our LLM agent framework and compare it with state-of-the-art personal mobility generation approaches, demonstrating the effectiveness of our approach and its potential applications in urban mobility. Overall, this study marks the pioneering work of designing an LLM agent framework for activity generation based on real-world human activity data, offering a promising tool for urban mobility analysis.

In the next phase of the AIGMob project, I intend to advance the LLM-based agent framework by incorporating fine-grained conditional control. This includes conditioning the generative model on diverse contextual factors such as urban infrastructure, real-time events, and individual-level attributes like daily routines or mobility preferences. By doing so, we aim to generate more realistic and adaptive personal mobility trajectories that better reflect the complex dynamics of urban environments. Furthermore, the project will expand from single-agent to multi-agent simulation, allowing us to capture interactions among residents and simulate city-scale mobility patterns. Alongside model development, we will establish evaluation protocols and benchmark datasets to assess the fidelity, diversity, and controllability of generated data. Collaboration with urban planners and transportation experts will also be pursued to explore real-world applications such as disaster response planning and smart city design. These efforts will drive the project toward its long-term goal of enabling human-centric, AI-driven urban mobility simulation.

Outline of Research at the Start: The goal of this research is to develop a conditional generative AI framework for fine-grained urban mobility simulation. This framework will stand as an entirely data-powered pipeline, adeptly producing diverse trajectories based on prevailing traffic demands and specific traffic state conditions.
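The retrieval-augmented activity-generation strategy mentioned in the abstract can be sketched as follows. This is a minimal illustration under simplified assumptions, not the project's actual implementation: the function names (`retrieve_similar_days`, `build_prompt`, `stub_llm`) are hypothetical, and the LLM call is replaced by a simple vote over transitions in the retrieved activity chains so the example runs standalone.

```python
# Hypothetical sketch of retrieval-augmented activity generation.
# The real framework would send `prompt` to an LLM; here a stub votes
# over retrieved transitions so the example is self-contained.
from collections import Counter

def retrieve_similar_days(history, query_day, k=2):
    """Return the k past days whose activity sets overlap most with the
    activities observed so far today (simple set-overlap retrieval)."""
    def overlap(day):
        return len(set(day) & set(query_day))
    return sorted(history, key=overlap, reverse=True)[:k]

def build_prompt(retrieved, query_day):
    """Assemble retrieved examples into a prompt, so the evidence behind
    each generated activity stays inspectable (interpretability)."""
    examples = "\n".join("- " + " -> ".join(day) for day in retrieved)
    return (
        "Past activity chains of this resident:\n"
        f"{examples}\n"
        f"Observed so far today: {' -> '.join(query_day)}\n"
        "Predict the next activity:"
    )

def stub_llm(prompt, retrieved, last):
    """Stand-in for the LLM: vote over activities that follow `last`
    in the retrieved chains and return the majority choice."""
    votes = Counter()
    for day in retrieved:
        for prev, nxt in zip(day, day[1:]):
            if prev == last:
                votes[nxt] += 1
    return votes.most_common(1)[0][0] if votes else "home"

# Toy activity history for one simulated resident (illustrative data).
history = [
    ["home", "commute", "office", "lunch", "office", "home"],
    ["home", "commute", "office", "gym", "home"],
    ["home", "shopping", "cafe", "home"],
]
today = ["home", "commute", "office"]

retrieved = retrieve_similar_days(history, today)
prompt = build_prompt(retrieved, today)
next_activity = stub_llm(prompt, retrieved, last=today[-1])
```

The retrieval step is what makes the generation interpretable: every predicted activity can be traced back to the concrete past days that were placed in the prompt.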

  • 1. Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting

    • Keywords: (none listed)
    • Gao, Haotian; Jiang, Renhe; Dong, Zheng; Deng, Jinliang; Ma, Yuxin; Song, Xuan
    • 33rd International Joint Conference on Artificial Intelligence, IJCAI 2024
    • 2024
    • August 3, 2024 - August 9, 2024
    • Jeju, Korea, Republic of
    • Conference paper

    Spatiotemporal forecasting techniques are significant for various domains such as transportation, energy, and weather. Accurate prediction of spatiotemporal series remains challenging due to the complex spatiotemporal heterogeneity. In particular, current end-to-end models are limited by input length and thus often fall into spatiotemporal mirage, i.e., similar input time series followed by dissimilar future values and vice versa. To address these problems, we propose a novel self-supervised pre-training framework Spatial-Temporal-Decoupled Masked Pre-training (STD-MAE) that employs two decoupled masked autoencoders to reconstruct spatiotemporal series along the spatial and temporal dimensions. Rich-context representations learned through such reconstruction could be seamlessly integrated by downstream predictors with arbitrary architectures to augment their performances. A series of quantitative and qualitative evaluations on four widely used benchmarks (PEMS03, PEMS04, PEMS07, and PEMS08) are conducted to validate the state-of-the-art performance of STD-MAE. Codes are available at https://github.com/Jimmy-7664/STD-MAE.
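The decoupled masking idea in the abstract can be illustrated with a minimal sketch. This is not the released STD-MAE code (see the linked GitHub repository for that); it only shows, under simplified assumptions, how one masked view hides whole time steps while the other hides whole sensors, so each autoencoder is forced to reconstruct along a single dimension.

```python
# Illustrative sketch of decoupled spatial/temporal masking (not STD-MAE's
# actual implementation). A series of shape (T time steps, N sensors) gets
# two complementary masked views.
import random

def make_series(T, N, seed=0):
    """Toy spatiotemporal series: T rows (time steps) of N sensor readings."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(N)] for _ in range(T)]

def temporal_mask(series, ratio, seed=0):
    """Zero out a `ratio` fraction of whole time steps (all sensors)."""
    rng = random.Random(seed)
    T = len(series)
    hidden = set(rng.sample(range(T), int(ratio * T)))
    view = [[0.0] * len(row) if t in hidden else list(row)
            for t, row in enumerate(series)]
    return view, hidden

def spatial_mask(series, ratio, seed=0):
    """Zero out a `ratio` fraction of whole sensors (all time steps)."""
    rng = random.Random(seed)
    N = len(series[0])
    hidden = set(rng.sample(range(N), int(ratio * N)))
    view = [[0.0 if n in hidden else v for n, v in enumerate(row)]
            for row in series]
    return view, hidden

series = make_series(T=12, N=4)
t_view, t_hidden = temporal_mask(series, ratio=0.25)  # hides 3 of 12 steps
s_view, s_hidden = spatial_mask(series, ratio=0.5)    # hides 2 of 4 sensors
# In STD-MAE, one masked autoencoder would reconstruct `series` from
# t_view and another from s_view; their learned representations then
# augment an arbitrary downstream predictor.
```

Decoupling the two masks is the design point: a single joint mask would let the model lean on whichever dimension is easier, whereas separate views force both temporal and spatial context to be learned.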

    ...