Construction of a Learnable Encryption Method Without Key Management for Deep Learning Using Distributed Data
Project source
Principal investigator
Funding institution
Project number
Year approved
Approval date
Project level
Research period
Funding amount
Discipline
Discipline code
Fund category
Keywords
Participants
Participating institutions
1. Effective Fine-Tuning of Vision Transformers with Low-Rank Adaptation for Privacy-Preserving Image Classification
- Keywords:
- Classification (of information);Computer vision;Privacy-preserving techniques;Tuning;Adaptation methods;Decomposition matrix;Embeddings;Fine tuning;Images classification;Low-rank adaptation;Model weights;Privacy preserving;Transformer modeling;Vision transformer
- Lin, Haiwei;Imaizumi, Shoko;Kiya, Hitoshi
- 14th IEEE Global Conference on Consumer Electronics, GCCE 2025
- 2025
- September 23, 2025 - September 26, 2025
- Osaka, Japan
- Conference
We propose a low-rank adaptation method for training privacy-preserving vision transformer (ViT) models that efficiently freezes pre-trained ViT model weights. In the proposed method, trainable rank decomposition matrices are injected into each layer of the ViT architecture; moreover, the patch embedding layer is not frozen, unlike in conventional low-rank adaptation methods. The proposed method allows us not only to reduce the number of trainable parameters but also to maintain almost the same accuracy as full fine-tuning. © 2025 IEEE.
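The core mechanism the abstract builds on can be sketched as follows. This is a minimal NumPy illustration of low-rank adaptation (LoRA), not the authors' implementation; all names and sizes are toy values, and the paper's additional step of leaving the patch embedding layer unfrozen is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                        # toy hidden size and LoRA rank
W = rng.standard_normal((d, d))    # pre-trained weight, kept frozen
A = rng.standard_normal((r, d))    # trainable down-projection
B = np.zeros((d, r))               # trainable up-projection, zero-initialized
alpha = 4.0                        # scaling factor

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are updated
    # during fine-tuning, so trainable parameters per adapted matrix drop
    # from d * d to 2 * d * r.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((1, d))
# With B zero-initialized, the adapted layer initially matches the frozen one.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because the low-rank update starts at zero, training begins exactly at the pre-trained model and only the small matrices A and B (plus, in the proposed method, the patch embedding) receive gradients.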
2. Learnable Image Encryption Without Key Management for Privacy-Preserving Vision Transformer
- Keywords:
- Encryption; Cryptography; Training; Accuracy; Transformers; Computer vision; Visualization; Image classification; Privacy; Data models; Vision transformer; image encryption; privacy preserving; SECRET KEY; SOLVER
- Hirose, Mare;Imaizumi, Shoko;Kiya, Hitoshi
- IEEE Access
- 2025
- Volume 13
- Journal
We propose a privacy-preserving image classification method based on perceptual encryption that does not require centralized key management. In the proposed method, each client independently generates an encryption key to protect visual information in both training and query images. The use of independent keys allows multiple clients to use a shared model without exchanging keys and to easily update their keys whenever needed. In addition, even if a key is compromised, the impact does not propagate to other clients. Perceptual encryption allows training and query images to be used directly in the encrypted domain, but conventional perceptual-encryption approaches are known to degrade image classification accuracy when each client uses an independent key, due to the significant visual distortion caused by encryption. Accordingly, we demonstrate that a novel method focusing on the compatibility between block-wise image encryption and the embedding structure of the vision transformer (ViT) is effective in mitigating this issue. We carried out experiments to demonstrate the effectiveness of the method in terms of accuracy and robustness on CIFAR-10 and Tiny ImageNet. Compared to conventional methods, when using independent keys, the accuracy was improved by 82% for CIFAR-10 and 83% for Tiny ImageNet. In addition, resistance to various attacks, including brute-force attacks and jigsaw puzzle attacks, was demonstrated under the assumption of ciphertext-only attacks. These results suggest the practicality and effectiveness of the method for secure image classification in real-world multi-client environments.
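The idea of block-wise encryption that is compatible with the ViT embedding structure can be illustrated with a simple sketch. The code below is a hypothetical example, not the paper's algorithm: pixels are permuted inside each block with a key-seeded permutation, with the block size chosen to match the ViT patch size, and each client derives its permutation from its own key so no key exchange or central key management is needed.

```python
import numpy as np

def block_shuffle(img, key, block=4, decrypt=False):
    # Key-seeded pixel permutation applied independently to every
    # block x block tile; `decrypt=True` applies the inverse permutation.
    perm = np.random.default_rng(key).permutation(block * block)
    if decrypt:
        perm = np.argsort(perm)
    out = img.copy()
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = out[i:i + block, j:j + block].reshape(-1)
            out[i:i + block, j:j + block] = tile[perm].reshape(block, block)
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8x8 "image"
enc = block_shuffle(img, key=42)
# The correct key inverts the encryption; another client's key does not.
assert np.array_equal(block_shuffle(enc, key=42, decrypt=True), img)
assert not np.array_equal(block_shuffle(enc, key=7, decrypt=True), img)
```

Because each tile maps one-to-one onto a ViT patch, a per-block pixel permutation acts as a fixed rearrangement within each patch embedding, which is one intuition for why such encryption can remain learnable by the model.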
3. A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique
- Keywords:
- Computer vision;Privacy-preserving techniques;Semantic Segmentation;Semantic Web;Adaptation techniques;Domain adaptation;Images encryptions;Model training;Perceptual encryptions;Privacy preserving;Segmentation methods;Semantic segmentation;Test images;Vision transformer
- Sueyoshi, Homare;Nishikawa, Kiyoshi;Kiya, Hitoshi
- 14th IEEE Global Conference on Consumer Electronics, GCCE 2025
- 2025
- September 23, 2025 - September 26, 2025
- Osaka, Japan
- Conference
We propose a privacy-preserving semantic-segmentation method that applies perceptual encryption not only to test images but also to the images used for model training. The method also provides almost the same accuracy as models trained without any encryption. This performance is achieved by using a domain-adaptation technique on the embedding structure of the vision transformer (ViT). The effectiveness of the proposed method was experimentally confirmed in terms of semantic-segmentation accuracy when using Segmentation Transformer, a powerful ViT-based semantic-segmentation model. © 2025 IEEE.
