Energy-efficient hardware design of artificial neural networks for mobile platforms

Date
2023-03-09
Authors
Karadeniz, Mahmut Burak
Publisher
Graduate School
Abstract
Deep Neural Networks (DNNs), which have recently improved in accuracy and usefulness, are becoming increasingly common in autonomous systems and diagnostic tools. These improvements come at a cost: the rapid growth in the energy consumption of DNNs necessitates novel methods for improving their energy efficiency. Modern energy-optimization approaches combine the traditional computing paradigm with a variety of performance-enhancement strategies, including memory partitioning, spatial mapping, energy-efficient multiplication, weight and input precision optimization, bit-serial computation, and MAC-based processing-element management. Although these strategies alleviate the energy problem to some extent, their implementation complexity erodes much of the benefit.

One energy-efficiency solution is the unary number system, which simplifies the arithmetic operations of a hardware processor, such as multiplication and addition. However, this representation has drawbacks for the hardware processor, such as the scarcity of rich random sources and a latency problem. A real-time stochastic signal generator called STAMP is built to overcome these issues. STAMP has low hardware cost and generates high-quality random stochastic bit streams at high speed in unary format.

A new hybrid bit serial-parallel, most-significant-bit-first (MSB-first) number representation is proposed, which differs from traditional techniques. The motivation behind the new representation is to find a number system in which each parallel or serial line of the number, denoted by m and n respectively, can be handled independently. If the serial lines can run independently, the hardware area depends only on m and does not grow with n, because the same hardware can be reused for each serial line.

A new hybrid processor called TALIPOT is proposed for use in DNNs. TALIPOT optimizes the operating accuracy/energy point by truncating bits at the output once the desired accuracy is reached. Simulations on the MNIST and CIFAR-10 datasets show that TALIPOT outperforms state-of-the-art computation techniques in terms of energy consumption.

After developing TALIPOT, a computer-aided design tool called TAHA is built to employ TALIPOT easily and efficiently on DNNs. TAHA provides an interface and a complete guide that takes the user from training, testing, and optimizing DNN hardware through to prototyping it efficiently on an SoC. By exploiting algorithm/hardware co-design and integrating the TALIPOT hybrid processor, TAHA readily offers a set of optimized DNN hardware deployment options, from which the user can select the configuration that maximizes energy savings under an accuracy constraint.
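The two hardware ideas at the heart of the abstract, unary (stochastic) arithmetic and MSB-first serial processing with output-bit truncation, can be illustrated with a short software sketch. The Python below is only a conceptual model written for this page, not the thesis hardware or its tools; the function names, stream length, and bit widths are illustrative assumptions. It shows why multiplication collapses to a bitwise AND when values are encoded as stochastic bit streams, and how stopping an MSB-first serial computation early trades precision for fewer cycles.

```python
import numpy as np


def unary_multiply(p_a: float, p_b: float, stream_len: int = 4096, seed: int = 0) -> float:
    """Multiply two values in [0, 1] using unipolar stochastic (unary) streams.

    Each value is encoded as the probability of a 1 in a random bit stream,
    so multiplication reduces to a bitwise AND followed by counting 1s --
    the kind of arithmetic simplification the unary number system offers.
    """
    rng = np.random.default_rng(seed)
    stream_a = rng.random(stream_len) < p_a      # stochastic bit stream encoding p_a
    stream_b = rng.random(stream_len) < p_b      # stochastic bit stream encoding p_b
    return float(np.mean(stream_a & stream_b))   # AND + popcount approximates p_a * p_b


def msb_first_truncate(x: int, total_bits: int = 8, kept_bits: int = 4) -> int:
    """Approximate an unsigned integer by consuming its bits MSB-first and
    stopping after `kept_bits` bits, i.e. truncating the low-order bits.

    Illustrates how ending a serial MSB-first computation early trades
    precision for fewer cycles (and therefore less energy).
    """
    acc = 0
    for i in range(kept_bits):
        bit = (x >> (total_bits - 1 - i)) & 1    # take the next bit, MSB first
        acc = (acc << 1) | bit                   # accumulate serially
    return acc << (total_bits - kept_bits)       # re-align to the original weight


if __name__ == "__main__":
    print(unary_multiply(0.75, 0.5))      # ~0.375, within stream-length-limited noise
    print(msb_first_truncate(0b10110111)) # 176 (0b10110000) after keeping 4 MSBs
```

In the same spirit, a processor that consumes serial lines MSB-first can stop as soon as the result is accurate enough, which is the accuracy/energy trade-off the abstract attributes to TALIPOT's output-bit truncation.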
Description
Thesis(Ph.D.) -- Istanbul Technical University, Graduate School, 2023
Keywords
Deep Neural Networks, neural networks