I am interested in enabling human-like embodied computational systems to help us in a variety of ways and situations. One approach is for such systems to interact autonomously with humans in human-like ways; another is for them to mediate human-human communication, making interaction go more smoothly in circumstances such as communication between distant places. (Mixed approaches combining the two are also possible.) I am working on theories and mechanisms in both areas, and on the impact and benefits of such technologies for education, medical care, remote communication, and other domains.
We present a novel framework that automatically generates natural gesture motions accompanying speech from audio utterances. Built on a bidirectional LSTM network, our deep network learns speech-gesture relationships with both backward and forward consistency over long time spans. At each time step, the network regresses a full 3D skeletal pose of a human from perceptual features extracted from the input audio.
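The architecture described above can be sketched as a bidirectional LSTM that consumes per-frame audio features and regresses joint positions at every time step. This is a minimal illustrative sketch in PyTorch, not the paper's implementation: the feature dimension, hidden size, and joint count below are assumptions chosen for demonstration.

```python
import torch
import torch.nn as nn

class AudioToGesture(nn.Module):
    """Sketch: a bidirectional LSTM mapping a sequence of per-frame audio
    features to a 3D skeletal pose per time step. All dimensions here
    (audio_dim, hidden_dim, num_joints) are illustrative assumptions."""
    def __init__(self, audio_dim=26, hidden_dim=128, num_joints=20):
        super().__init__()
        # Bidirectional recurrence gives each frame both backward and
        # forward temporal context, as described in the abstract.
        self.lstm = nn.LSTM(audio_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Regress (x, y, z) per joint from the concatenated
        # forward/backward hidden states.
        self.head = nn.Linear(2 * hidden_dim, num_joints * 3)

    def forward(self, audio_feats):
        # audio_feats: (batch, time, audio_dim)
        h, _ = self.lstm(audio_feats)
        pose = self.head(h)  # (batch, time, num_joints * 3)
        return pose.view(audio_feats.size(0), audio_feats.size(1), -1, 3)

model = AudioToGesture()
x = torch.randn(2, 100, 26)  # 2 clips, 100 frames of audio features each
poses = model(x)
print(poses.shape)           # torch.Size([2, 100, 20, 3])
```

In practice, such a model would be trained on paired audio and motion-capture sequences with a pose regression loss; the sketch only shows the per-frame sequence-to-sequence mapping.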
Room 808, 8th floor, Building 2, Yamahana Campus, Faculty of Engineering, Hokkai-Gakuen University, 1-1 Minami-26-jo Nishi 11-chome, Chuo-ku, Sapporo 064-0926, Japan