Construction of a Learnable Encryption Method Requiring No Key Management for Deep Learning on Distributed Data
Project source
Principal investigator
Funded institution
Project number
Approval year
Approval date
Project level
Research period
Funding amount
Discipline
Discipline code
Fund category
Keywords
Participants
Participating institutions
1. Effective Fine-Tuning of Vision Transformers with Low-Rank Adaptation for Privacy-Preserving Image Classification
- Keywords:
- Classification (of information);Computer vision;Privacy-preserving techniques;Tuning;Adaptation methods;Decomposition matrix;Embeddings;Fine tuning;Images classification;Low-rank adaptation;Model weights;Privacy preserving;Transformer modeling;Vision transformer
- Lin, Haiwei;Imaizumi, Shoko;Kiya, Hitoshi
- 《14th IEEE Global Conference on Consumer Electronics, GCCE 2025》
- 2025
- September 23, 2025 - September 26, 2025
- Osaka, Japan
- Conference
We propose a low-rank adaptation method for training privacy-preserving vision transformer (ViT) models that efficiently freezes pre-trained ViT model weights. In the proposed method, trainable rank decomposition matrices are injected into each layer of the ViT architecture; moreover, unlike in conventional low-rank adaptation methods, the patch embedding layer is not frozen. The proposed method allows us not only to reduce the number of trainable parameters but also to maintain almost the same accuracy as full fine-tuning. © 2025 IEEE.
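The low-rank adaptation idea described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the hidden dimension `d`, rank `r`, and the single projection matrix `W` are hypothetical placeholders chosen only to show how trainable rank-decomposition matrices `A` and `B` sit alongside a frozen weight and how few parameters they add.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 768  # hidden dimension of a ViT layer (illustrative)
r = 4    # low rank (illustrative)

# Frozen pre-trained weight of one projection layer (hypothetical values).
W = rng.standard_normal((d, d))

# Trainable rank-decomposition matrices injected alongside W.
# B starts at zero, so training begins exactly at the pre-trained model.
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))

def lora_forward(x):
    # Frozen path x @ W.T plus the low-rank update path (B @ A).
    return x @ W.T + x @ A.T @ B.T

x = rng.standard_normal((1, d))
y = lora_forward(x)

# With B = 0 the adapted layer matches the frozen layer exactly.
assert np.allclose(y, x @ W.T)

# Only A and B (plus, in the proposed method, the patch embedding)
# are trained, instead of the full d x d matrix.
full_params = d * d       # 589824
lora_params = 2 * d * r   # 6144
```

The parameter count per adapted layer drops from d·d to 2·d·r, which is the source of the "reduce the number of trainable parameters" claim in the abstract.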
2. A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique
- Keywords:
- Computer vision;Privacy-preserving techniques;Semantic Segmentation;Semantic Web;Adaptation techniques;Domain adaptation;Images encryptions;Model training;Perceptual encryptions;Privacy preserving;Segmentation methods;Semantic segmentation;Test images;Vision transformer
- Sueyoshi, Homare;Nishikawa, Kiyoshi;Kiya, Hitoshi
- 《14th IEEE Global Conference on Consumer Electronics, GCCE 2025》
- 2025
- September 23, 2025 - September 26, 2025
- Osaka, Japan
- Conference
We propose a privacy-preserving semantic-segmentation method that applies perceptual encryption not only to test images but also to the images used for model training. The method provides almost the same accuracy as models trained without any encryption. This performance is achieved by applying a domain-adaptation technique to the embedding structure of the Vision Transformer (ViT). The effectiveness of the proposed method was experimentally confirmed in terms of semantic-segmentation accuracy using a powerful ViT-based semantic-segmentation model, the Segmentation Transformer. © 2025 IEEE.
