Transfer Learning in Sign Language
We build word models for American Sign Language
(ASL) that transfer between different signers and different
viewing aspects. This is advantageous because one could use
large amounts of labelled avatar data in combination with a
smaller amount of labelled human data to spot a large number
of words in human data. Transfer learning is possible
because we represent blocks of video with novel intermediate
discriminative features based on splits of the data. By
constructing the same splits in avatar and human data and
clustering appropriately, our features are both discriminative
and semantically similar: across signers similar features
imply similar words. We demonstrate transfer learning
in two scenarios: from an avatar to a frontally viewed human
signer and from an avatar to a human signer in a 3/4 view.
Ali Farhadi, David Forsyth, Ryan White, "Transfer Learning in Sign Language," CVPR, 2007.

@inproceedings{white2007siggraph,
  author    = {Ali Farhadi and David Forsyth and Ryan White},
  title     = {Transfer Learning in Sign Language},
  booktitle = {Computer Vision and Pattern Recognition},
  year      = {2007},
}