Temporal-Sound based User Interface for Smart Home

Authors

Kido Tani and Nobuyuki Umezu, Ibaraki University, Japan

Abstract

We propose a gesture-based interface to control a smart home. Our system replaces existing physical controls with temporal gesture commands, sensed first as sound and later as acceleration. In our preliminary experiments, we recorded the sounds generated by six different gestures (including knocking on a desk, clicking a mouse, and clapping) and converted them into spectrogram images. Classification learning was performed on these images using a CNN. Owing to differences between the microphones used, classification was unsuccessful for most of the data. We then recorded acceleration values, instead of sounds, using a smartwatch. Five types of motions were performed in our experiments, and activity classification was executed on these acceleration data using Core ML, a machine learning framework provided by Apple Inc. These results still leave much room for improvement.
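As a rough illustration of the first pipeline (gesture sound, spectrogram image, CNN), the sketch below converts one audio clip into a log-mel spectrogram and defines a small image classifier over such spectrograms. It is a minimal sketch of the general technique, not the authors' code; the file name, sampling rate, network shape, and the choice of librosa and TensorFlow are all assumptions on our part.

```python
# Minimal sketch, assuming librosa and TensorFlow; "knock.wav" is a
# hypothetical recording of one gesture sound.
import librosa
import numpy as np
import tensorflow as tf

# Load a gesture sound and convert it to a log-mel spectrogram "image".
y, sr = librosa.load("knock.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
img = librosa.power_to_db(mel, ref=np.max)      # (64, T) log-scaled image
img = img[np.newaxis, ..., np.newaxis]          # add batch and channel axes

# A small CNN over spectrogram images, one output per gesture class (6 here).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=img.shape[1:]),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

In practice one would render many such spectrograms per gesture class and train with model.fit; as the abstract notes, this kind of classifier degrades when the training and test microphones differ.

For the second experiment, the abstract reports activity classification of smartwatch acceleration data with Core ML. Core ML and its training companion Create ML are Apple's Swift-side frameworks, so the Python stand-in below only illustrates the usual preprocessing idea: split a three-axis acceleration stream into fixed-size windows and fit an off-the-shelf classifier. All data, window sizes, and the classifier choice are placeholders.

```python
# Stand-in sketch of the accelerometer pipeline (not Core ML itself):
# window a 3-axis acceleration stream, then fit a generic classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window(acc, size=100, step=50):
    """Split an (N, 3) accelerometer stream into overlapping (size, 3) windows."""
    return np.stack([acc[i:i + size] for i in range(0, len(acc) - size + 1, step)])

rng = np.random.default_rng(0)
acc = rng.normal(size=(1000, 3))          # placeholder x/y/z samples
X = window(acc).reshape(len(window(acc)), -1)  # flatten each window to a feature row
labels = rng.integers(0, 5, size=len(X))  # placeholder labels for 5 motion types

clf = RandomForestClassifier().fit(X, labels)
print(clf.predict(X[:1]))                 # classify one new window
```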

Keywords

Smart Home, Sound Classification, IoT, Machine Learning

Full Text

Volume 11, Number 21