Please use this identifier to cite or link to this item: http://hdl.handle.net/10316/92463
DC Field: Value
dc.contributor.author: Faria, Diego R.
dc.contributor.author: Martins, Ricardo Filipe Alves
dc.contributor.author: Lobo, Jorge
dc.contributor.author: Dias, Jorge
dc.date.accessioned: 2021-01-13T19:18:40Z
dc.date.available: 2021-01-13T19:18:40Z
dc.date.issued: 2012-03-03
dc.identifier.issn: 0921-8890
dc.identifier.uri: http://hdl.handle.net/10316/92463
dc.description.abstract: Humans excel in manipulation tasks, a basic skill for our survival and a key feature in our man-made world of artefacts and devices. In this work, we study how humans manipulate simple daily objects, and construct a probabilistic representation model of the tasks and objects that is useful for autonomous grasping and manipulation by robotic hands. Human demonstrations of predefined object manipulation tasks are recorded from both the human hand and the object points of view. The multimodal data acquisition system records human gaze, 6D pose of the hand and fingers, finger flexure, tactile forces distributed on the inside of the hand, colour images and stereo depth maps, as well as object 6D pose and object tactile forces using instrumented objects. From the acquired data, relevant features are detected concerning motion patterns, tactile forces and hand-object states. This enables modelling a class of tasks from sets of repeated demonstrations of the same task, so that a generalised probabilistic representation can be derived for task planning in artificial systems. An object-centred probabilistic volumetric model is proposed to fuse the multimodal data and to map contact regions, gaze and tactile forces during stable grasps. This model is refined by segmenting the volume into components approximated by superquadrics, and by overlaying the contact points, taking the task context into account. Results show that the extracted features are sufficient to distinguish the key patterns that characterise each stage of the manipulation tasks, ranging from simple object displacement, where the same grasp is employed throughout (homogeneous manipulation), to more complex interactions such as object reorientation, fine positioning and sequential in-hand rotation (dexterous manipulation). The framework presented retains the relevant data from human demonstrations, concerning both the manipulation and the object characteristics, to be used by future grasp planners in artificial systems performing autonomous grasping.
dc.language.iso: eng
dc.rights: embargoedAccess
dc.title: Extracting data from human manipulation of objects towards improving autonomous robotic grasping
dc.type: article
degois.publication.firstPage: 396
degois.publication.lastPage: 410
degois.publication.issue: 3
degois.publication.title: Robotics and Autonomous Systems
dc.relation.publisherversion: https://www.sciencedirect.com/science/article/pii/S0921889011001527
dc.peerreviewed: yes
dc.identifier.doi: 10.1016/j.robot.2011.07.020
degois.publication.volume: 60
dc.date.embargo: 2012-03-03
uc.date.periodoEmbargo: 0
item.fulltext: With full text
item.grantfulltext: open
item.languageiso639-1: en
crisitem.author.orcid: 0000-0001-7184-185X
Appears in Collections:FCTUC Eng.Electrotécnica - Artigos em Revistas Internacionais
Files in This Item:
File | Description | Size | Format
full-text.pdf | full-text | 848.19 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.