Conference Publication Details
Mandatory Fields
Smeaton A.; McGuinness K.; Gurrin C.; Zhou J.; O'Connor N.; Wang P.; Davis B.; Azevedo L.; Freitas A.; Signal L.; Smith M.; Stanley J.; Barr M.; Chambers T.; Mhurchu C.
iV&L-MM 2016 - Proceedings of the 2016 ACM Workshop on Vision and Language Integration Meets Multimedia Fusion, co-located with ACM Multimedia 2016
Semantic indexing of wearable camera images: Kids'Cam concepts
2016
October
Published
1
()
Optional Fields
27
34
© 2016 ACM. To provide content-based search over visual media, including images and video, such media are typically accessed via manually or automatically assigned concepts or tags, or sometimes via image-image similarity, depending on the use case. While great progress has been made in recent years in automatic concept detection using machine learning, we are still left with a mismatch between the semantics of the concepts we can automatically detect and the semantics of the words used in a user's query, for example. In this paper we report on a large collection of images from wearable cameras gathered as part of the Kids'Cam project, which have been both manually annotated from a vocabulary of 83 concepts and automatically annotated from a vocabulary of 1,000 concepts. This collection allows us to explore how language, in the form of two distinct concept vocabularies or spaces, one manually assigned and thus forming a ground truth, is used to represent images, in our case taken using wearable cameras. It also allows us to discuss, in general terms, issues around concept mismatch in visual media that derives from language mismatches. We report the data processing we have completed on this collection and some of our initial experimentation in mapping across the two language vocabularies.
10.1145/2983563.2983566
Grant Details