Human-Object-Object-Interaction Affordance
Document Type
Conference Proceeding
Publication Date
1-2013
Keywords
object recognition, belief networks, image motion analysis, learning (artificial intelligence), trained network, human-object-object-interaction affordance learning approach, HOO, motion models, object recognition reliability, paired objects, human actions, object labels, Bayesian network
Digital Object Identifier (DOI)
https://doi.org/10.1109/WORV.2013.6521912
Abstract
This paper presents a novel human-object-object (HOO) interaction affordance learning approach that models the interaction motions between paired objects in a human-object-object manner and uses these motion models to improve object recognition reliability. The innate interaction-affordance knowledge of the paired objects is learned from a set of labeled training data that contains the relative motions of the paired objects, human actions, and object labels. The learned pair-relationship knowledge is represented with a Bayesian network, and the trained network is used to improve the recognition reliability of the objects.
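As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below shows how a posterior over paired-object labels could be sharpened by observed relative-motion and human-action evidence via Bayes' rule. All object categories, motion/action labels, and probability values here are hypothetical placeholders.

# Hedged sketch: a toy Bayesian update over paired-object labels given
# observed relative-motion and human-action evidence. All categories and
# probability tables below are hypothetical, for illustration only.

prior = {
    # Hypothetical label pairs (object A, object B) with prior probabilities,
    # e.g. from an appearance-based recognizer that is uncertain on its own.
    ("kettle", "cup"): 0.30,
    ("pitcher", "bowl"): 0.25,
    ("spoon", "bowl"): 0.25,
    ("kettle", "bowl"): 0.20,
}

likelihood = {
    # Hypothetical likelihoods P(motion, action | label pair), of the kind
    # that could be learned from labeled training sequences of relative
    # motion and human action.
    (("kettle", "cup"), ("tilt-over", "pour")): 0.60,
    (("pitcher", "bowl"), ("tilt-over", "pour")): 0.50,
    (("spoon", "bowl"), ("tilt-over", "pour")): 0.05,
    (("kettle", "bowl"), ("tilt-over", "pour")): 0.40,
}

def posterior(evidence):
    """Return P(label pair | motion, action) by Bayes' rule."""
    unnorm = {pair: prior[pair] * likelihood.get((pair, evidence), 1e-6)
              for pair in prior}
    z = sum(unnorm.values())
    return {pair: p / z for pair, p in unnorm.items()}

if __name__ == "__main__":
    # Observing a "tilt-over" relative motion together with a "pour" action
    # shifts the belief toward object pairs that afford pouring.
    for pair, p in sorted(posterior(("tilt-over", "pour")).items(),
                          key=lambda kv: -kv[1]):
        print(pair, round(p, 3))

In the paper's setting the learned Bayesian network plays the role of the likelihood terms above, linking object labels, relative motions, and human actions so that motion evidence can disambiguate otherwise unreliable object recognition.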
Was this content written or created while at USF?
Yes
Citation / Publisher Attribution
2013 IEEE Workshop on Robot Vision (WORV), Clearwater Beach, FL, 2013, pp. 1-6.
Scholar Commons Citation
Ren, Shaogang and Sun, Yu, "Human-Object-Object-Interaction Affordance" (2013). Computer Science and Engineering Faculty Publications. 84.
https://digitalcommons.usf.edu/esb_facpub/84