Enhancing efficiency and privacy of federated learning systems for IoT applications

Funding Source

Japan Society for the Promotion of Science (JSPS)

Principal Investigator

Kouichi Sakurai (櫻井 幸一)

Funded Institution

Kyushu University

Fiscal Year

2024

Start Date

Not disclosed

Grant Number

24KF0065

Research Period

Unknown / Unknown

Funding Level

National

Award Amount

2,000,000 JPY

Discipline

Information security (情報セキュリティ関連)

Discipline Code

Not disclosed

Grant Category

Grant-in-Aid for JSPS Fellows (特別研究員奨励費)

Keywords

Not disclosed

Participants

LIAN ZHUOTAO

Participating Institutions

Not disclosed

Project Abstract (Outline of Research at the Start): To address data-privacy issues in IoT environments, this research develops a federated learning system that processes data locally and shares only the results. This reduces the risk of personal-information leakage while enabling effective use of the data. Specifically, the project plans to introduce techniques and mechanisms that improve communication efficiency and further strengthen privacy protection. The research is expected to enable safe data utilization in fields such as healthcare, transportation, and urban planning.

  • 1.POSTER: AI-Based Physical Layer Key Generation Mechanism

    • Keywords:
    • Cryptography;Deep learning;Feature extraction;Internet of things;Network layers;Network security;Physical layer;Signal processing;CryptoGraphics;Features fusions;Features selection;Intelligent feature selection;Key generation;Multi-source feature fusion;Multi-Sources;Physical layer key generation;Physical layers;Source features
    • Zhao, Hong;Lian, Zhuotao;Wang, Xinsheng;Guo, Enting
    • 《19th International Conference on Provable and Practical Security, ProvSec 2025》
    • 2026
    • October 10, 2025 - October 12, 2025
    • Yokohama, Japan
    • Conference

    With the rapid proliferation of wireless devices, ensuring secure communication over open wireless channels has become increasingly critical. Physical layer key generation has emerged as a lightweight and information-theoretically secure cryptographic mechanism, offering a promising complement to traditional cryptographic approaches. However, in Internet of Things (IoT) scenarios characterized by heterogeneous devices and highly dynamic environments, conventional physical-layer key generation methods suffer from low key generation rates, poor stability, and limited adaptability. To address these challenges, this paper proposes a novel AI-based physical layer key generation framework that integrates multi-source signal fusion and intelligent feature selection. Specifically, deep learning models are employed to fuse features from multiple heterogeneous wireless signal sources, enabling the extraction of high-quality randomness and enhancing key entropy and security. In parallel, an attention mechanism is employed to dynamically select the most suitable physical layer features based on real-time environmental conditions, thereby enhancing the system’s adaptability and robustness in complex IoT settings. Finally, we outline potential future research directions and discuss the feasibility of implementing the proposed framework in real-world deployments. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2026.

    ...
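The fusion-plus-selection idea in this abstract can be sketched concretely. The following is a minimal illustration, not the paper's model: softmax attention weights over several hypothetical wireless feature sources, using sample variance as a stand-in for the learned relevance scoring the abstract describes. The source names, dimensions, and scoring rule are all assumptions for the demo.

```python
import numpy as np

def attention_fuse(features):
    """Softmax-attention fusion of per-source feature vectors.

    `features` maps a signal source to a 1-D feature vector of common
    length. Each source is scored by its sample variance (a stand-in for
    a learned scoring network), the scores are softmaxed into attention
    weights, and the fused vector is the weighted sum of the sources.
    """
    names = list(features)
    mat = np.stack([features[n] for n in names])   # (n_sources, dim)
    scores = mat.var(axis=1)                       # saliency proxy
    w = np.exp(scores - scores.max())
    w /= w.sum()                                   # softmax over sources
    return w @ mat, dict(zip(names, w))

rng = np.random.default_rng(0)
sources = {
    "rssi":  rng.normal(0, 1.0, 64),   # received signal strength
    "csi":   rng.normal(0, 2.0, 64),   # channel state information
    "phase": rng.normal(0, 0.5, 64),   # carrier phase offsets
}
fused, weights = attention_fuse(sources)
```

With this toy scoring rule, the higher-variance source receives the largest attention weight, mimicking how an attention mechanism would favor the most informative physical-layer features.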
  • 2.POSTER: A Server-Side Proactive Defense Framework for Poison-Resilient Federated Learning

    • Keywords:
    • Collaborative learning;Data privacy;Distributed computer systems;Learning systems;Network security;Anomaly detection;Collaborative modeling;Dual mechanisms;Machine unlearning;Model training;Poisoning attacks;Proactive defense;Proactive layer;Real-time protection;Server sides
    • Zeng, Qingkui;Lian, Zhuotao
    • 《19th International Conference on Provable and Practical Security, ProvSec 2025》
    • 2026
    • October 10, 2025 - October 12, 2025
    • Yokohama, Japan
    • Conference

    Federated Learning (FL) enables collaborative model training without sharing raw data, but remains vulnerable to poisoning attacks from malicious clients. Existing defenses are often reactive and require costly model retraining, making them inefficient and impractical for real-time protection. We propose FedCleaner, a server-side dual-mechanism framework that combines: Proactive Layer-Wise Anomaly Detection to identify poisoned updates in real time; Retroactive Contribution Erasure to efficiently unlearn malicious client influences without retraining. Experiments on datasets show that FedCleaner provides a scalable, privacy-preserving, and regulation-compliant solution to defend FL systems against persistent poisoning threats. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2026.

    ...
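The proactive layer-wise detection step can be illustrated with a toy: score each client's update, layer by layer, against the coordinate-wise median update and flag clients with low mean cosine similarity. The threshold, similarity measure, and sign-flipping attacker below are illustrative assumptions, not FedCleaner's actual rules.

```python
import numpy as np

def flag_anomalies(updates, threshold=0.5):
    """Layer-wise cosine screen for suspicious client updates.

    `updates` is one dict per client mapping layer name -> flat ndarray
    of parameter deltas. Per layer, each client's delta is compared to
    the coordinate-wise median across clients; clients whose mean cosine
    similarity falls below `threshold` are flagged. A minimal stand-in
    for the paper's proactive layer-wise detector.
    """
    layers = list(updates[0])
    sims = np.zeros(len(updates))
    for layer in layers:
        stack = np.stack([u[layer] for u in updates])
        ref = np.median(stack, axis=0)             # robust reference
        for i, row in enumerate(stack):
            denom = np.linalg.norm(row) * np.linalg.norm(ref) + 1e-12
            sims[i] += (row @ ref) / denom
    sims /= len(layers)
    return [i for i, s in enumerate(sims) if s < threshold]

rng = np.random.default_rng(1)
base = {"conv": rng.normal(size=50), "fc": rng.normal(size=20)}
honest = [{k: v + rng.normal(scale=0.1, size=v.shape) for k, v in base.items()}
          for _ in range(3)]
poisoned = {k: -v for k, v in base.items()}        # sign-flipping attacker
flagged = flag_anomalies(honest + [poisoned])
```

The median reference keeps the screen robust as long as honest clients form a majority, which is why the lone sign-flipping client stands out.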
  • 3.Privacy-Preserving LLM Agent for Multi-modal Health Monitoring

    • Keywords:
    • Medical computing;Modal analysis;Patient monitoring;Sensitive data;Health monitoring;Homomorphic encryptions;Homomorphic-encryptions;Language model;Large language model;Model agents;Multi-modal;Privacy;Privacy concerns;Privacy preserving
    • Xie, Qipeng;Wu, Jiafei;Wang, Weiyu;Lian, Zhuotao;Yuan, Mu;Shuai, Xian;Wang, Weizheng;Haoyi, Yuan;Hu, Haibo;Wu, Kaishun
    • 《19th International Conference on Provable and Practical Security, ProvSec 2025》
    • 2026
    • October 10, 2025 - October 12, 2025
    • Yokohama, Japan
    • Conference

    Tool-using LLM agents for health monitoring raise critical privacy concerns as they share sensitive patient data with cloud providers and third-party models. This study presents HealthAgent, a privacy-preserving LLM agent framework that protects both user queries and multi-modal sensor data through homomorphic encryption. HealthAgent enables an LLM orchestrator to coordinate specialized AI models for complex health assessments while processing all data in encrypted form. The system achieves 95% task decomposition accuracy with 10s latency, demonstrating that strong privacy guarantees can be maintained without sacrificing real-time performance in health monitoring applications. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2026.

    ...
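The abstract's core primitive, homomorphic encryption, allows computation directly on ciphertexts. As a self-contained illustration of the additive property such an agent could rely on, here is textbook Paillier with tiny demo primes; these are insecure parameters for exposition only, and HealthAgent's actual scheme is not specified here.

```python
import math
import random

def keygen(p, q):
    """Textbook Paillier key generation with g = n + 1 (demo primes only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    # mu = L(g^lam mod n^2)^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(n + 1, lam, n * n) - 1) // n, -1, n)
    return n, (lam, mu)

def encrypt(n, m):
    r = random.randrange(1, n)                 # coprime with n w.h.p.
    return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(n, sk, c):
    lam, mu = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

n, sk = keygen(1000003, 1000033)               # toy primes, NOT secure
# Multiplying ciphertexts adds the underlying plaintexts.
c = encrypt(n, 41) * encrypt(n, 1) % (n * n)
```

A server holding only `c` can aggregate encrypted sensor readings without ever seeing them; only the key holder can run `decrypt`.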
  • 4.POSTER: Tricking LLM-Based NPCs into Spilling Secrets

    • Keywords:
    • Modeling languages;Speech processing;Dialogue systems;Game security;Language model;Large language model;Model-based OPC;NPC dialog system;Prompt injection
    • Shiomi, Kyohei;Lian, Zhuotao;Nakanishi, Toru;Kitasuka, Teruaki
    • 《19th International Conference on Provable and Practical Security, ProvSec 2025》
    • 2026
    • October 10, 2025 - October 12, 2025
    • Yokohama, Japan
    • Conference

    Large Language Models (LLMs) are increasingly used to generate dynamic dialogue for game NPCs. However, their integration raises new security concerns. In this study, we examine whether adversarial prompt injection can cause LLM-based NPCs to reveal hidden background secrets that are meant to remain undisclosed. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2026.

    ...
  • 5.AggreMark: Efficient Watermarking in Federated Learning via Weighted Averaging

    • Keywords:
    • Adversarial machine learning;Contrastive Learning;Differential privacy;Backdoors;Distributed machine learning;Intellectual property rights;Original model;Parallel servers;Privacy preserving;Server sides;Watermark embedding;Watermarking algorithms;Weighted averaging
    • Lian, Zhuotao;Wang, Weiyu;Su, Chunhua;Sakurai, Kouichi
    • 《IEEE Congress on Cybermatics: 17th IEEE International Conference on Internet of Things, iThings 2024, 20th IEEE International Conference on Green Computing and Communications, GreenCom 2024, 17th IEEE International Conference on Cyber, Physical and Social Computing, CPSCom 2024, 10th IEEE International Conference on Smart Data, SmartData 2024》
    • 2024
    • August 19, 2024 - August 22, 2024
    • Copenhagen, Denmark
    • Conference

    Federated learning has rapidly advanced as a privacy-preserving, distributed machine learning methodology. Protecting the intellectual property rights of federated models, however, poses significant challenges. Existing backdoor-based watermarking techniques suffer from complexity, inefficiency, and implementation challenges. In this paper, we present AggreMark, a novel watermarking approach that incorporates parallel server-side training and watermark embedding during the weighted averaging phase of federated learning. This method does not alter the original model structure or parameters related to the original learning task and adds no additional time overhead, ensuring its practicality for real-world applications. AggreMark is easy to implement and maintain, and it robustly preserves the model's high performance. We validate the method's effectiveness and robustness against pruning-based watermark removal attacks through experiments on CIFAR-10. © 2024 IEEE.

    ...
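The embedding-during-weighted-averaging idea can be sketched as a toy: perform standard FedAvg, then force the signs of a few chosen carrier coordinates to the watermark bits. The carrier positions, magnitude floor, and sign-based extraction rule below are illustrative assumptions, not AggreMark's published algorithm.

```python
import numpy as np

def aggremark_round(client_params, sample_counts, wm_idx, wm_bits, alpha=0.01):
    """Weighted FedAvg aggregation with a server-side watermark step.

    `client_params` are flat parameter vectors and `sample_counts` the
    usual FedAvg weights. After averaging, the coordinates in `wm_idx`
    (an illustrative choice of carrier positions) have their signs set
    to the +/-1 `wm_bits`, with magnitude floored at `alpha` so the bits
    survive; all other coordinates are left untouched.
    """
    w = np.asarray(sample_counts, dtype=float)
    w /= w.sum()
    agg = sum(wi * p for wi, p in zip(w, client_params))   # FedAvg
    agg[wm_idx] = wm_bits * np.maximum(np.abs(agg[wm_idx]), alpha)
    return agg

def extract_watermark(params, wm_idx):
    """Recover the embedded bits as the signs of the carrier positions."""
    return np.sign(params[wm_idx]).astype(int)

rng = np.random.default_rng(2)
clients = [rng.normal(size=100) for _ in range(4)]
counts = [10, 20, 30, 40]
idx = np.arange(0, 100, 10)                    # 10 carrier coordinates
bits = np.array([1, -1, 1, 1, -1, -1, 1, -1, 1, -1])
model = aggremark_round(clients, counts, idx, bits)
```

Because only sign information is nudged at a handful of coordinates, the task-related parameters and aggregation cost are essentially unchanged, which matches the abstract's claim of no added time overhead.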