高信頼システム間連携のための仮想/現実空間連動ブロックチェーン基盤の研究開発 (Research and Development of a Blockchain Platform Linking Virtual and Real Spaces for Highly Reliable Inter-System Collaboration)
Project source
Principal investigator
Funded institution
Year of approval
Approval date
Project number
Project level
Research period
Funding amount
Discipline
Discipline code
Fund category
Keywords
Participants
Participating institutions
1. Secure Federated Matrix Factorization via Device-to-Device Model Shuffling
- Keywords:
- Agglomeration;Cryptography;Data accuracy;Data acquisition;Data aggregation;Data collection;Data privacy;Learning systems;Location;Matrix algebra;Recommender systems;Aggregation methods;Device modelling;Distributed systems;Location based;Location data;Location recommendation;Matrix factorizations;Model shuffling;Privacy concerns;Training epochs
- Sasada, Taisho;Hossain, Delwar;Taenaka, Yuzo;Rahman, Mahbubur;Kadobayashi, Youki
- IEEE Access
- 2025
- Volume 13
- Issue
- Journal article
Location-Based Recommendation Systems (LBRS) use device location data to suggest nearby hotels, restaurants, and points of interest. Since directly collecting location data from users can raise privacy concerns, there is growing interest in building recommendation systems on Federated Learning (FL). Under FL, the parameters of a recommendation model learned on each user's device are collected on a single server to build an aggregated model. While FL avoids the privacy concerns of direct data collection, it may construct unfair models that repeatedly recommend specific locations. Training methods exist that achieve fair recommendations and prevent such bias, but they require more training epochs than usual. In FL, a malicious server can infer the original location data by continuously tracking a specific user's parameter updates, and the inference accuracy grows with the number of training epochs. Achieving fair location recommendations in FL therefore puts the original data at risk. In this paper, we design a novel parameter aggregation method to build fair and secure FL recommendation models. In the proposed method, users exchange parameters with each other before model aggregation to prevent malicious servers from inferring the original data. Even if a server (adversary) continuously tracks a specific user's device, it cannot obtain parameters from the same user, which prevents inference of the original location data. Experimental results demonstrate that the proposed method reduces training time while maintaining the same accuracy as a homomorphic encryption approach.
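The aggregation method above is described only in prose; the following is a minimal Python/NumPy sketch of the underlying idea, namely shuffling parameter updates among devices before they reach the server so that an update cannot be attributed to a specific user. All names (local_update, d2d_shuffle, federated_round) and the FedAvg-style averaging are illustrative assumptions, and the device-to-device exchange is simulated by a single permutation rather than real peer-to-peer links.

```python
# Sketch of device-to-device shuffling before server-side aggregation.
# Assumptions: parameters are plain NumPy vectors, local training is a
# fake gradient step, and the D2D exchange is simulated by one permutation.
import numpy as np

rng = np.random.default_rng(0)

def local_update(params: np.ndarray) -> np.ndarray:
    """Stand-in for one local training epoch on a user's device."""
    return params - 0.1 * rng.normal(size=params.shape)

def d2d_shuffle(updates: list[np.ndarray]) -> list[np.ndarray]:
    """Devices exchange parameter vectors with each other, so the server
    cannot link the update it receives to the user who produced it."""
    order = rng.permutation(len(updates))
    return [updates[i] for i in order]

def federated_round(global_params: np.ndarray, n_clients: int) -> np.ndarray:
    updates = [local_update(global_params.copy()) for _ in range(n_clients)]
    shuffled = d2d_shuffle(updates)      # identities decoupled before upload
    return np.mean(shuffled, axis=0)     # ordinary FedAvg-style aggregation

params = np.zeros(8)
for _ in range(5):
    params = federated_round(params, n_clients=10)
```

Because the server only ever sees a permuted batch of updates, tracking one device across rounds no longer yields that device's own parameters, which is the property the abstract relies on.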
2. FedFusion: Adaptive Model Fusion for Addressing Feature Discrepancies in Federated Credit Card Fraud Detection
- Keywords:
- Fraud;Credit cards;Adaptation models;Training;Feature extraction;Federated learning;Long short term memory;Convolutional neural networks;Heterogeneous networks;Credit card fraud;fraud detection system;federated learning;FedFusion;CNN;MLP;LSTM;data heterogeneity;SMOTE
- Aurna, Nahid Ferdous;Hossain, Md Delwar;Khan, Latifur;Taenaka, Yuzo;Kadobayashi, Youki
- IEEE Access
- 2024
- Volume 12
- Issue
- Journal article
The digitization of financial transactions has led to a rise in credit card fraud, necessitating robust measures to secure digital financial systems from fraudsters. Traditional centralized approaches for detecting such fraud, despite their effectiveness, often do not maintain the confidentiality of financial data. Consequently, Federated Learning (FL) has emerged as a promising solution, enabling the secure and private training of models across organizations. However, the practical implementation of FL is challenged by data heterogeneity among institutions, which complicates model convergence. To address this issue, we propose FedFusion, which fuses local and global models to harness the strengths of both, ensuring convergence even when clients hold heterogeneous data with entirely different feature sets. Our approach involves three distinct datasets with completely different feature sets, each assigned to a separate federated client. Prior to FL training, the datasets are preprocessed to select significant features across three deep learning models. The Multilayer Perceptron (MLP), identified as the best-performing model, undergoes personalized training for each dataset. These trained MLP models serve as local models, while the main MLP architecture acts as the global model. FedFusion then adaptively trains all clients, optimizing the fusion proportions. Experimental results demonstrate the approach's superiority, achieving detection rates of 99.74%, 99.70%, and 96.61% for clients 1, 2, and 3, respectively. This highlights the effectiveness of FedFusion in addressing data heterogeneity challenges, paving the way for more secure and efficient fraud detection systems in digital finance.
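The adaptive fusion of local and global models can be pictured with a short sketch. The mixing rule below (a convex combination whose weight shifts toward whichever model currently fits a client's validation data better) is an assumption for exposition, not the published FedFusion procedure, and the names fuse and adapt_alpha are hypothetical.

```python
# Illustrative local/global parameter fusion with a per-client adaptive weight.
# The loss-driven update of alpha is an assumed stand-in for FedFusion's
# optimization of fusion proportions.
import numpy as np

def fuse(local: np.ndarray, global_: np.ndarray, alpha: float) -> np.ndarray:
    """Convex combination of a client's local parameters and the global ones."""
    return alpha * local + (1.0 - alpha) * global_

def adapt_alpha(alpha: float, loss_local: float, loss_global: float,
                step: float = 0.05) -> float:
    """Shift weight toward whichever model currently fits this client better."""
    alpha += step if loss_local < loss_global else -step
    return float(np.clip(alpha, 0.0, 1.0))

global_params = np.zeros(4)
client_local = np.array([1.0, 0.5, 0.0, -0.5])
alpha = 0.5
for _ in range(3):
    # Losses would come from the client's own validation split; fixed here.
    alpha = adapt_alpha(alpha, loss_local=0.30, loss_global=0.45)
    fused = fuse(client_local, global_params, alpha)
```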
...
