Accelerated Bayesian Optimization for Deep Learning

Authors

Ayahiko Niimi and Kousuke Sakamoto, Future University Hakodate, Japan

Abstract

Bayesian optimization for deep learning requires extensive execution time because it involves many calculations and parameters. To address this problem, this study aims to accelerate execution by focusing on the output of the activation function, which is strongly related to accuracy. We developed a technique that accelerates execution by stopping the training of models whose activation-function outputs in the first and second layers become zero. Two experiments were conducted to confirm the effectiveness of the proposed method. First, we implemented the proposed technique and compared its execution time with that of standard Bayesian optimization; the proposed method successfully accelerated Bayesian optimization for deep learning. Second, we applied the proposed method to credit card transaction data. These experiments confirmed that the purpose of our study was achieved. In particular, we conclude that the proposed method can accelerate execution when deep learning is applied to an extremely large amount of data.
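The abstract does not include an implementation; the sketch below is only an illustration of the general idea, not the authors' code. It combines a Gaussian-process Bayesian search (scikit-optimize's gp_minimize) with a Keras callback that abandons a candidate configuration once the first two hidden layers output (near-)zero activations. The network architecture, the zero threshold eps, the probe batch, and the synthetic data are all assumptions made for the example.

```python
import numpy as np
import tensorflow as tf
from skopt import gp_minimize
from skopt.space import Integer, Real

# Toy data standing in for a real dataset (e.g. credit card transactions).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")


class ZeroActivationStopping(tf.keras.callbacks.Callback):
    """Stop training a candidate model once the first two hidden layers
    produce (near-)zero activations on a probe batch."""

    def __init__(self, probe_x, eps=1e-6):
        super().__init__()
        self.probe_x = probe_x
        self.eps = eps

    def on_epoch_end(self, epoch, logs=None):
        # Sub-model exposing the outputs of the first two hidden layers.
        probe = tf.keras.Model(self.model.input,
                               [layer.output for layer in self.model.layers[:2]])
        a1, a2 = probe(self.probe_x, training=False)
        if np.abs(a1.numpy()).max() < self.eps and np.abs(a2.numpy()).max() < self.eps:
            self.model.stop_training = True  # abandon this configuration early


def objective(params):
    """Train one candidate configuration and return 1 - training accuracy."""
    lr, units = params
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(units, activation="relu"),
        tf.keras.layers.Dense(units, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(float(lr)),
                  loss="binary_crossentropy", metrics=["accuracy"])
    history = model.fit(X, y, epochs=10, batch_size=64, verbose=0,
                        callbacks=[ZeroActivationStopping(X[:64])])
    return 1.0 - history.history["accuracy"][-1]


# Gaussian-process Bayesian optimization over learning rate and layer width.
result = gp_minimize(objective,
                     [Real(1e-4, 1e-1, prior="log-uniform"), Integer(8, 128)],
                     n_calls=15, random_state=0)
print("best 1 - accuracy:", result.fun, "best params:", result.x)
```

Under these assumptions, configurations whose early layers go silent are cut off after a single epoch instead of consuming the full training budget, which is where the reported speed-up of the search would come from.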

Keywords

Deep Learning, Bayesian Optimization, Activation Function, Real Dataset

Full Text  Volume 7, Number 13