Pioneering Brain-Inspired In-Memory Computing that Exploits the Nonlinear Characteristics of Analog Circuits
1. Chaos-based reinforcement learning with TD3
- Keywords: Autonomous agents; Computational methods; Deep learning; Learning algorithms; Chaos-based reinforcement learning; Chaotic dynamics; Deterministics; Dynamic drives; Echo state networks; Policy gradient; Reinforcement learning algorithms; Reinforcement learnings; State of the art; TD3
- Authors: Matsuki, Toshitaka; Sakemi, Yusuke; Aihara, Kazuyuki
- Journal: Neural Networks
- Year: 2026
- Volume: 195
- Type: journal article
Chaos-based reinforcement learning (CBRL) is a method in which the agent's internal chaotic dynamics drive exploration. However, learning algorithms for CBRL have not been thoroughly developed in previous studies, nor have they incorporated recent advances in reinforcement learning. This study introduces the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, a state-of-the-art deep reinforcement learning method for deterministic policies over continuous action spaces, into CBRL. The validation results provide several insights. First, TD3 works as a learning algorithm for CBRL in a simple goal-reaching task. Second, CBRL agents with TD3 can autonomously suppress their exploratory behavior as learning progresses and resume exploration when the environment changes. Finally, examining the effect of the agent's chaoticity on learning shows that there is a suitable range of chaos strength in the agent's model for flexibly switching between exploration and exploitation and adapting to environmental changes. © 2025 Elsevier Ltd
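The abstract's core mechanism, exploration driven by the agent's internal chaotic dynamics rather than injected noise, can be illustrated with a minimal sketch. Below is a hypothetical NumPy echo state network whose spectral radius plays the role of chaos strength; all names, sizes, and values are illustrative assumptions, and the TD3 training of the readout weights (the part the paper actually contributes) is omitted.

```python
import numpy as np

# Hedged sketch (not the paper's actual model): an echo state network (ESN)
# agent whose internal chaoticity is set by the reservoir's spectral radius.
# Above roughly 1.0, the autonomous reservoir dynamics become chaotic, so a
# purely deterministic policy still explores. TD3 (not shown) would train
# the readout W_out. All sizes and constants are illustrative assumptions.

rng = np.random.default_rng(0)

N_RES = 300   # reservoir size (assumed)
N_OBS = 4     # observation dimension (assumed)
N_ACT = 2     # continuous action dimension (assumed)
CHAOS = 1.2   # spectral radius; > 1 pushes the reservoir into chaos

# Random reservoir, rescaled to the desired spectral radius.
W = rng.normal(size=(N_RES, N_RES)) / np.sqrt(N_RES)
W *= CHAOS / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=(N_RES, N_OBS)) * 0.1
# Untrained readout; in the paper's setting TD3 would learn these weights.
W_out = rng.normal(size=(N_ACT, N_RES)) * 0.1

def step(x, obs):
    """One reservoir update; tanh keeps the state bounded."""
    x = np.tanh(W @ x + W_in @ obs)
    action = np.tanh(W_out @ x)  # deterministic policy: no noise injected
    return x, action

x = rng.normal(size=N_RES) * 0.1
for t in range(100):
    obs = np.zeros(N_OBS)  # placeholder environment observation
    x, a = step(x, a if False else obs)[0], step(x, obs)[1]
```

One correction to the sketch's last loop for clarity, and the point it makes: even with a fixed input, the action trajectory wanders chaotically, which is the exploration mechanism CBRL relies on; lowering CHAOS below 1 would make the same deterministic policy settle into exploitation.

```python
x = rng.normal(size=N_RES) * 0.1
for t in range(100):
    obs = np.zeros(N_OBS)  # fixed input; any drift in `a` is internal chaos
    x, a = step(x, obs)
```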
2. Harnessing Nonidealities in Analog In-Memory Computing Circuits: A Physical Modeling Approach for Neuromorphic Systems
- Keywords: Computer hardware; Deep neural networks; Energy efficiency; Energy utilization; Green computing; Intelligent systems; Large datasets; Learning systems; SPICE; Timing circuits; Computing circuits; Equation based; In-memory computing; Large-scale network; Neural-networks; Nonideality; Physical modelling; Physical neural network; Spiking neural network
- Authors: Sakemi, Yusuke; Okamoto, Yuji; Morie, Takashi; Nobukawa, Sou; Hosomi, Takeo; Aihara, Kazuyuki
- Journal: Advanced Intelligent Systems
- Year: 2025
- Type: journal article
Large-scale deep learning models are increasingly constrained by their immense energy consumption, which limits their scalability and applicability for edge intelligence. In-memory computing (IMC) offers a promising solution by addressing the von Neumann bottleneck inherent in traditional deep learning accelerators, significantly reducing energy consumption. However, the analog nature of IMC introduces hardware nonidealities that degrade model performance and reliability. This article presents a novel approach to directly train physical models of IMC, formulated as ordinary differential equation (ODE)-based physical neural networks (PNNs). To enable the training of large-scale networks, a technique called differentiable spike-time discretization is proposed, which cuts the computational cost of ODE-based PNNs by up to a factor of 20 in runtime and a factor of 100 in memory. The resulting large-scale networks improve learning performance on the CIFAR-10 dataset by exploiting hardware nonidealities. The proposed bottom-up methodology is validated through post-layout SPICE simulations of an IMC circuit with nonideal characteristics implemented in the sky130 process. The proposed PNN approach reduces the discrepancy between model behavior and circuit dynamics by at least an order of magnitude. This work paves the way for leveraging nonideal physical devices, such as nonvolatile resistive memories, for energy-efficient deep learning applications. © 2025 The Author(s). Advanced Intelligent Systems published by Wiley-VCH GmbH.
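As a rough illustration of the general idea of training through a circuit model, here is a minimal PyTorch sketch of an ODE-based layer: the forward pass integrates a leaky RC-like equation with explicit Euler steps, and a tanh stands in for a nonideal saturating transfer characteristic. This is not the paper's actual PNN formulation and does not implement its differentiable spike-time discretization; the class name, dynamics, and hyperparameters are assumptions made for illustration.

```python
import torch

# Hedged sketch: a tiny ODE-based "physical neural network" layer.
# Every Euler step is a differentiable torch op, so gradients flow
# through the circuit dynamics and training can adapt to, or even
# exploit, the modeled nonideality. tau, dt, steps, and all sizes
# are illustrative assumptions, not values from the paper.

class OdePnnLayer(torch.nn.Module):
    def __init__(self, n_in, n_out, tau=5.0, steps=20, dt=0.5):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(n_out, n_in) * 0.1)
        self.tau, self.steps, self.dt = tau, steps, dt

    def forward(self, u):
        v = torch.zeros(u.shape[0], self.W.shape[0])  # node voltages
        drive = torch.tanh(u @ self.W.T)  # nonideal saturating input stage
        for _ in range(self.steps):       # explicit Euler integration
            v = v + self.dt * (-v / self.tau + drive)  # dv/dt = -v/tau + drive
        return v

# Toy usage: regress random targets end-to-end through the circuit model.
torch.manual_seed(0)
layer = OdePnnLayer(8, 4)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
x, y = torch.randn(32, 8), torch.randn(32, 4)
for _ in range(200):
    loss = ((layer(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```

The design point this illustrates is the paper's bottom-up stance: the nonideality (here the tanh saturation) sits inside the trained forward model rather than being treated as post-hoc noise, so the optimizer shapes the weights around the physics.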
